RMP Think-aloud fMRI Dataset

Submitted by YAN Chao-Gan


 

This dataset was used to investigate the neural representations of self-generated thought during think-aloud fMRI scans (Li, H.X., Lu, B., Wang, Y.W., Li, X.Y., Chen, X., Yan, C.G., 2022. Neural representations of self-generated thought during think-aloud fMRI. NeuroImage, 119775. https://doi.org/10.1016/j.neuroimage.2022.119775).

The data were shared through the R-fMRI Maps Project (RMP). All analysis code is available on GitHub (https://github.com/Chaogan-Yan/PaperScripts/tree/master/LiHX_2022_NeuroImage).

Download Link:

Please use FTP software to download the archive (Host: lab.rfmri.org, Username: ftpdownload, Password: FTPDownload, Path: /sharing/RfMRIMaps/PaperDataSharing/Li_2022_ThinkAloudfMRIData/ThinkAloudData.zip). A scripted download example is given below.

Please sign the Data Use Agreement and email the scanned, signed copy to lihx#psych.ac.cn to obtain the unzip password.
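For convenience, here is a minimal Python sketch of the FTP download using only the host, credentials, and path listed above; the local filename is an arbitrary choice, and any standard FTP client works just as well. The downloaded archive still requires the unzip password obtained through the Data Use Agreement.

```python
from ftplib import FTP

# Connection details exactly as listed above.
HOST = "lab.rfmri.org"
USER = "ftpdownload"
PASSWD = "FTPDownload"
REMOTE_PATH = "/sharing/RfMRIMaps/PaperDataSharing/Li_2022_ThinkAloudfMRIData/ThinkAloudData.zip"
LOCAL_FILE = "ThinkAloudData.zip"  # arbitrary local filename

ftp = FTP(HOST)
ftp.login(USER, PASSWD)
with open(LOCAL_FILE, "wb") as f:
    ftp.retrbinary(f"RETR {REMOTE_PATH}", f.write)  # stream the zip to disk
ftp.quit()
```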

Investigators and Affiliations:

Hui-Xian Li 1, 2, 3, Chao-Gan Yan 1, 2, 3, 4

1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing 100101, China;

2. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China;

3. International Big-Data Center for Depression Research, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China;

4. Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China.

 

Funding:

National Natural Science Foundation of China (82122035, 81671774 and 81630031);

Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006);

Beijing Nova Program of Science and Technology (Z191100001119104).

 

Sample Size:

Total: 86 (45 females; mean age = 22.1 ± 2.7 years).

Exclusion criteria: any MRI contraindications, current psychiatric or neurological disorders, a clinical diagnosis of neurologic trauma, use of psychotropic medication, and any history of substance or alcohol abuse.

 

Scan Procedures and Parameters:

MRI scanning

Several days prior to scanning, participants were interviewed, informed of the purpose of the study, and practiced producing think-aloud verbal reports outside the scanner. All instructions and interview details can be found in the paper.

 

All participants completed a free verbal report stage of the think-aloud fMRI method. They were asked to keep their eyes open and fixate on a cross on the screen, to stay awake, and to report whatever was currently on their minds, without regard to grammar.

In addition, a subset of participants (47: 24 females; mean age = 22.9 ± 2.9 years; ThinkAloudfMRIData2) completed an additional 10-minute control condition scan with a two-condition block design. Participants looked at the fixation cross; when the cue word “report” appeared, they reported any thoughts or images that came to mind during the following period, and when the cue word “don’t report” appeared, they made no verbal report during the subsequent period. Reporting and non-reporting phases were presented in pseudo-randomized order, with 30 blocks in each of the two conditions (see the sketch below). The order of the free verbal report stage and the control verbal report stage was counterbalanced across participants.

Verbal reports were recorded with FOMRI-III™ (Fiber Optic Microphone for Functional MRI, https://www.optoacoustics.com/medical/fomri-iii/features), an advanced adaptive noise-canceling microphone designed for MRI environments. In addition, at the end of each stage, thought content and phenomenology were assessed with a series of items derived from a factor analysis of self-generated thoughts (Gorgolewski et al., 2014); item scores ranged from 1 (not at all) to 9 (almost all).
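To make the block structure concrete, here is a minimal Python sketch of one way to generate such a pseudo-randomized cue sequence. The 10 s block duration and the no-more-than-two-repeats constraint are illustrative assumptions (the description above specifies only a 10-minute run with 30 blocks per condition); this is not the study’s actual randomization scheme.

```python
import random

# Illustrative assumptions (not taken from the paper): a 10-minute run
# with 60 blocks implies ~10 s per block; "pseudo-randomized" is modeled
# here as rejecting sequences with three identical cues in a row.
N_BLOCKS_PER_CONDITION = 30
BLOCK_DURATION_S = 10

def pseudo_randomized_sequence(rng):
    """Shuffle 30 'report' and 30 "don't report" cues until no cue
    occurs more than twice in a row."""
    while True:
        seq = (["report"] * N_BLOCKS_PER_CONDITION
               + ["don't report"] * N_BLOCKS_PER_CONDITION)
        rng.shuffle(seq)
        # Reject any shuffle containing a run of three identical cues.
        if all(seq[i] != seq[i + 1] or seq[i + 1] != seq[i + 2]
               for i in range(len(seq) - 2)):
            return seq

rng = random.Random(2022)  # fixed seed for reproducibility
schedule = [(i * BLOCK_DURATION_S, cue)
            for i, cue in enumerate(pseudo_randomized_sequence(rng))]
print(schedule[:4])  # e.g. [(0, 'report'), (10, "don't report"), ...]
```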

 

Image Acquisition:

Images were acquired on 3 Tesla GE MR750 scanners at the Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences, with 8-channel head coils. Before the functional acquisitions, all participants first underwent a 3D T1-weighted scan (192 sagittal slices, TR = 6.65 ms, TE = 2.93 ms, inversion time (IT) = 450 ms, field of view (FOV) = 256 × 256 mm², flip angle (FA) = 12°, slice thickness = 1 mm, voxel size = 1 × 1 × 1 mm³). After the T1 acquisition, functional images were obtained for the free verbal report stage and the control verbal report stage (37 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90°, FOV = 220 × 220 mm², matrix = 64 × 64, slice thickness = 3.5 mm, voxel size = 3.5 × 3.5 × 3.5 mm³).
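As a quick sanity check on these parameters, the 10-minute control run at TR = 2000 ms corresponds to 300 volumes (any dummy scans are not mentioned in this summary and are ignored here):

```python
# Number of volumes in the 10-minute control verbal report run at TR = 2 s.
run_seconds = 10 * 60   # control run length from the paradigm description
tr_seconds = 2.0        # TR = 2000 ms
print(int(run_seconds / tr_seconds))  # -> 300
```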

 

Code availability:

Analysis code and other behavioral data are openly shared at https://github.com/Chaogan-Yan/PaperScripts/tree/master/LiHX_2022.

 

References:

Li, H.X., Lu, B., Chen, X., Li, X.Y., Castellanos, F.X., Yan, C.G., 2021. Exploring self-generated thoughts in a resting state with natural language processing. Behav Res Methods.

 

Gorgolewski, K.J., Lurie, D., Urchs, S., Kipping, J.A., Craddock, R.C., Milham, M.P., Margulies, D.S., Smallwood, J., 2014. A correspondence between individual differences in the brain's intrinsic functional architecture and the content and form of self-generated thoughts. PloS One 9, e97176.