Setup: How do I start a new project? What data do I need?

If you have raw functional/anatomical data (not preprocessed), start by launching the conn toolbox and clicking Setup->New, respond ‘Yes’ to the question of whether to spatially preprocess the data now, and follow the steps to select your functional/anatomical data (modifying the default preprocessing options to fit your goals if needed).

If you have preprocessed functional/anatomical data (realigned, coregistered, normalized, and optionally smoothed), start by launching the conn toolbox and filling in the fields on each of the tabs of the Setup section (Basic, Functional, Structural, etc.). Note: in the ‘Functional’ tab, select the smoothed, normalized functional volumes, and in the ‘Structural’ tab, select the normalized structural volumes; optionally, in the ‘ROIs’ tab, select the normalized grey/white/CSF volumes (these are typically created during normalization of the structural volume for each subject) for the corresponding ROIs.

If you have functional/anatomical data that has been previously analyzed in SPM, you can start by launching the conn toolbox, clicking Setup->Import, selecting the number of subjects, and entering the corresponding SPM.mat files (one per subject). This will directly import all of the experimental information into the conn toolbox (conditions and optional first-level covariates), as well as the associated functional volumes and realignment parameter files.

You can also skip the GUI entirely and define all of the necessary information through scripts. See the conn toolbox batch manual for additional information.

Setup: What are first-level and second-level covariates?

First-level covariates are within-subject timeseries that you typically want to regress out of the BOLD timeseries (e.g. subject movement parameters). Second-level covariates are between-subject variables that characterize your subjects/groups (e.g. dummy-coded group variables, or other subject descriptors such as IQ, age, gender, etc.). Any variables entered as first-level covariates will later (after running the ‘Setup’ step) be available in the ‘Preprocessing’ step for you to add as potential confounds (this will regress their effect out of the BOLD signal before computing connectivity measures). Variables defining second-level covariates can be added at any time and will be directly available in the ‘second-level results’ tab (no need to re-run any of the intermediate steps).

Preprocessing: What variables should I add to the ‘Confounds’ list? How is the aCompCor method implemented in the toolbox?

By default the toolbox implements the aCompCor strategy (Behzadi et al. 2007) for removal of confounding effects from the BOLD timeseries before computing connectivity measures. This is implemented by populating the ‘Confounds’ list in the ‘Preprocessing’ tab with: 1) white matter and CSF effects (characterized by 3 dimensions each, representing the variability of the BOLD signal timeseries observed within those areas); 2) main session or task effects (for task-related analyses) and their first temporal derivatives; and 3) realignment parameters and other first-level covariates. You can modify the entries in this list through the GUI or through the batch commands if you wish to include a different set of potential confounding variables. The toolbox will regress all of the effects listed in the ‘Confounds’ list out of the BOLD timeseries (for each ROI and/or for each voxel) before computing any connectivity measures.
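As a conceptual illustration of this confound-regression step, the sketch below removes a confound timeseries from a toy ‘BOLD’ series via ordinary least squares. This is generic Python, not CONN's actual code; the `residualize` helper and the simulated motion regressor are hypothetical.

```python
import math

def residualize(y, confounds):
    """Regress the confound timeseries out of y via ordinary least squares.

    Toy illustration only: CONN operates on whole volumes with many confound
    columns; here we solve the normal equations (X'X) b = X'y for a small
    design (confounds plus an intercept) by Gaussian elimination.
    """
    n, k = len(y), len(confounds)
    X = [[c[t] for c in confounds] + [1.0] for t in range(n)]  # add intercept column
    p = k + 1
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(p)]
    for col in range(p):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c2 in range(col, p):
                A[r][c2] -= f * A[col][c2]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    fitted = [sum(X[t][i] * beta[i] for i in range(p)) for t in range(n)]
    return [y[t] - fitted[t] for t in range(n)]

# Example: a toy 'BOLD' series contaminated by a motion-like confound
motion = [math.sin(0.3 * t) for t in range(50)]
bold = [0.8 * motion[t] + 0.1 * ((-1) ** t) for t in range(50)]
clean = residualize(bold, [motion])  # residual is orthogonal to the confound
```

After this step the residual timeseries carries no linear trace of the confound, which is the property the toolbox relies on before computing correlations.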

Preprocessing: What are the histogram-looking plots in the Preprocessing step? What should they look like?

These plots represent the (sample) distribution of voxel-to-voxel connectivity measures before removal of potential confounding variables (labeled ‘original’) and after removal of these effects (labeled ‘without confounds’). Typically you would expect to see a somewhat biased (shifted to the right) and wide distribution of connectivity values before preprocessing (e.g. depending on the amount/strength of subject movement, which introduces artifactual positive correlations between distant voxels), and a somewhat centered and narrower distribution after removal of confounding effects. If the ‘without confounds’ distribution still looks biased, you might want to increase the number of dimensions of the white/CSF confounds, and/or explore additional potential sources of variability (e.g. potential time-series confounds; see the art_detect toolbox at http://www.nitrc.org/projects/artifact_detect/; this will create additional first-level covariates that you can enter into the conn toolbox to effectively remove a set of outlier scans from consideration). If the ‘without confounds’ distribution looks wider than the ‘original’ distribution, this could indicate too few degrees of freedom; you could try removing some confounds (e.g. decrease the ‘derivatives order’ of the realignment confound to 0) to increase the degrees of freedom of the connectivity analyses.

Analyses: What are the sources in the first-level analysis step?

Sources typically list any ROI that you want to include as a seed for seed-to-voxel analyses, or any ROI that you want to include in your ROI-to-ROI analyses. Seed-to-voxel connectivity maps will be created for each subject and for each source ROI (connectivity values between the seed and every voxel of the brain), and ROI-to-ROI connectivity matrices will be computed for each subject and for each source ROI (connectivity values between the source ROI and every ROI defined in the Setup step).

While sources are typically defined using the default ‘derivatives order’=0 and ‘dimensions’=1 values, you can also modify these values to create multiple sources from each ROI (higher derivative terms will include additional sources whose timeseries characterize the temporal derivatives of the BOLD timeseries within each ROI, and higher dimension terms will include additional sources whose timeseries characterize the eigenvariates of the BOLD timeseries covariance within each ROI).
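To illustrate what a higher ‘derivatives order’ adds, the sketch below builds extra source timeseries as successive temporal derivatives of an ROI timeseries. This is a toy Python approximation using first differences; the `temporal_derivatives` helper is hypothetical and CONN's actual numerical scheme may differ.

```python
def temporal_derivatives(ts, order):
    """Return [ts, d(ts), d2(ts), ...] up to the requested derivative order.

    Illustrative sketch only: derivatives are approximated by first
    differences, padded with 0 to keep the series length constant.
    """
    out = [list(ts)]
    for _ in range(order):
        prev = out[-1]
        out.append([0.0] + [prev[t] - prev[t - 1] for t in range(1, len(prev))])
    return out

# 'derivatives order'=1 turns one ROI timeseries into two sources
sources = temporal_derivatives([1.0, 2.0, 4.0, 7.0], order=1)
# sources[0] is the original ROI timeseries, sources[1] its first derivative
```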

Analyses: What are the different connectivity measures in the first-level analysis step? Should I use correlation or regression measures?

There are four connectivity measures that can be computed by the toolbox: bivariate correlation, semipartial correlation, bivariate regression, and multiple regression. Most people seem to use the simpler bivariate correlations (as a measure of 'total' functional connectivity between two areas). Semipartial correlations are used when you want instead to obtain the 'unique' contribution of a given source on a target area (controlling for the contributions of other additional source areas); this is useful, for example, when studying in more detail the potential paths underlying the functional connectivity between two areas. Bivariate and multiple regression measures are equivalent to bivariate and semipartial correlation measures, but their units instead represent 'effective change' (percent signal change in the target area associated with each percent signal change in the source area; something closer to 'effective' connectivity). These measures are useful, for example, when one is concerned about potential differences in BOLD signal variance driving the connectivity/correlation results (regression measures are not biased by differences in variance between conditions/populations, while correlation measures can be; see e.g. Friston, 2011).

The resulting correlation values are Fisher-transformed correlation coefficients, i.e. atanh(r), where r is the correlation coefficient between the source area and the target area (voxels or regions).
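The transform itself is straightforward to reproduce (a generic Python sketch, not CONN code):

```python
import math

def fisher_z(r):
    """Fisher-transform a correlation coefficient: z = atanh(r)."""
    return math.atanh(r)

def correlation(x, y):
    """Pearson correlation between two equal-length timeseries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy seed/target timeseries; the Fisher transform stretches |r| toward
# the tails so the values are closer to normally distributed
r = correlation([1, 2, 3, 4], [1, 2, 3, 3])
z = fisher_z(r)
```

The transform makes the distribution of connectivity values approximately normal, which is why second-level tests operate on z rather than r.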

Analyses: What are voxel-to-voxel analyses? What do the measures listed in the first-level analyses->voxel-to-voxel tab represent?

Voxel-to-voxel analyses are a new addition available from conn toolbox v.13. They represent subject-level measures that are derived from the full matrix of voxel-to-voxel correlation values. Currently the analyses include connectome-MVPA (multivariate pattern analysis of the whole-brain connectome; Nieto-Castanon et al. in preparation) as well as several indexes characterizing specific aspects of the pattern of connectivity between each voxel and the rest of the brain: Integrated Local Correlation (ILC, Deshpande et al. 2007) characterizing the average local connectivity between each voxel and its neighbors; Radial Correlation Contrast (RCC, Goelman, 2004) characterizing the spatial asymmetry of the local connectivity pattern between each voxel and its neighbors; Intrinsic Connectivity Contrast (ICC, Martuzzi et al. 2011) characterizing the strength of the global connectivity pattern between each voxel and the rest of the brain; and Radial Similarity Contrast (Kim et al. 2010) characterizing the global similarity between the connectivity patterns of neighboring voxels. Each of these measures can be used to explore whole-brain connectivity differences among subjects/conditions without having to restrict the analyses to one or several a priori seeds/ROIs. For example, if you want to explore gender-related connectivity differences, you may use connectome-MVPA analyses to build a multivariate representation of the connectome at each voxel and for each subject, and then enter these components in a second-level multivariate analysis exploring across-subject differences in these components associated with gender. Regions that are significant in the resulting voxel-level analyses indicate gender-related differences in connectivity between those areas and the rest of the brain. You may then perform post hoc analyses using each of these areas as seeds to characterize which specific aspects of the connectivity between these areas and the rest of the brain differ between genders.
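As a rough illustration of the idea behind one of these measures, the sketch below computes an ICC-like score for each ‘voxel’ as the mean squared correlation between its timeseries and all others. This is a toy Python example; CONN's actual implementation of the Intrinsic Connectivity Contrast differs in detail, and the `intrinsic_connectivity` helper is hypothetical.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length timeseries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def intrinsic_connectivity(voxel_ts):
    """For each voxel, mean squared correlation with every other voxel.

    Toy sketch of the idea behind the Intrinsic Connectivity Contrast
    (Martuzzi et al. 2011): voxels strongly coupled to the rest of the
    'brain' receive a high score.
    """
    n = len(voxel_ts)
    icc = []
    for i in range(n):
        rs = [pearson(voxel_ts[i], voxel_ts[j]) ** 2 for j in range(n) if j != i]
        icc.append(sum(rs) / len(rs))
    return icc

# Three toy 'voxels': the first two are perfectly coupled, the third is not
ts = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [5, 1, 4, 2, 3]]
scores = intrinsic_connectivity(ts)  # scores[0] and scores[1] exceed scores[2]
```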

Results: How to specify an AN(C)OVA model across subjects?

In the conn toolbox, second-level analyses are implemented using a general linear model (GLM), which encompasses ANOVA as well as linear regression models. In the Setup->second-level covariates tab you can define as many subject effects as you wish. Then, in the ‘between-subjects effect’ list in the Results tab, you can simply select a subset of these effects to be included in each general linear model. For example, for a study with two subject groups (e.g. patients and controls) you could define two effects by dummy coding these subject groups (e.g. the patients effect contains 1’s for those subjects that belong to this group and 0’s for the rest, and the controls effect contains 1’s for those subjects that belong to the control group and 0’s for the patients; you could also define one additional effect named ‘all’ that includes all of the subjects). You could also include additional covariates (e.g. performance; you would typically enter one covariate named performance that contains the performance score for each subject, plus two additional covariates named performance_patients and performance_controls that contain the performance values within one group and 0’s for the subjects of the opposite group). Then, in the Results tab you could define the following models/contrasts:

1)To compare the connectivity results of patients vs. controls (disregarding the effect of other covariates) select ‘patients’ and ‘controls’ in the subject-effects list, and enter [1,-1] in the between-subject contrast field (this is equivalent to a two-sample t-test)

2)To perform the same comparison, but now controlling for potential between-group differences in performance, select ‘patients’, ‘controls’, and ‘performance’ in the subject-effects list, and enter [1,-1,0] in the between-subject contrast field (this is equivalent to a one-way ANCOVA)

3)To perform the same comparison, but now additionally controlling for potential group*performance interactions (e.g. differences between groups in the association between performance and connectivity), select ‘patients’, ‘controls’, ‘performance_patients’, and ‘performance_controls’ in the subject-effects list, and enter [1,-1,0,0] in the between-subject contrast field (this is equivalent to a one-way ANCOVA model with covariate interactions included). Note: in the presence of an interaction, main effects need careful interpretation; you will typically want to center the performance values at a common baseline level where you wish the between-group effect to be evaluated

4)In the same interaction model above, if you wish to test group*performance interactions you would select the same subject effects, and enter [0,0,1,-1] in the between-subject contrast field

As an additional note, when defining a second-level general linear model the conn toolbox will evaluate whether you wish to include all of the analyzed subjects in each particular analysis by inspecting the selected subject effects and checking whether any subject has a zero value in all of the selected effects. Subjects that contain 0’s in all of the selected effects are excluded from that particular analysis. This allows you, for example, to select simply the ‘patients’ effect in the subject-effects list (and enter simply a 1 in the between-subject contrast) to evaluate the level of connectivity (one-sample t-test) within the patients group only (not including the control group).
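The dummy coding and the subject-selection rule described above can be sketched as follows (a toy Python illustration; the variable names and the `build_design` helper are hypothetical, not part of the toolbox):

```python
# Hypothetical study: 3 patients followed by 2 controls, with per-subject
# performance scores (dummy coding as described in the text)
patients = [1, 1, 1, 0, 0]
controls = [0, 0, 0, 1, 1]
performance = [0.7, 0.9, 0.8, 0.6, 0.5]

def build_design(effects):
    """Stack the selected second-level covariates into a design matrix and
    drop subjects whose row is all zeros (mirroring CONN's selection rule)."""
    rows = list(zip(*effects))
    kept = [i for i, row in enumerate(rows) if any(v != 0 for v in row)]
    return kept, [rows[i] for i in kept]

# One-sample t-test within patients only: select just the 'patients' effect;
# the two control subjects have all-zero rows and are excluded automatically
kept, X = build_design([patients])

# Two-sample t-test: select 'patients' and 'controls' (contrast [1,-1]);
# every subject has a nonzero entry, so all five are included
kept2, X2 = build_design([patients, controls])
```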

Results: How to specify a regression model across subjects?

Following the same experimental example above (ANCOVA model), you could define the following models/contrasts:

1)To evaluate the association between performance and connectivity across all subjects, you would select the ‘all’ and ‘performance’ effects in the subject-effects list, and enter [0,1] in the between-subject contrast field (this is equivalent to a bivariate regression/correlation test)

2)To evaluate the association between performance and connectivity within one group (e.g. patients), you would select ‘patients’ and ‘performance_patients’ in the subject-effects list, and enter [0,1] in the between-subject contrast field

3)To evaluate between-group differences in this association, see example (4) in the ANCOVA model above (group*performance interaction)

You could also have additional regressors (e.g. IQ) and wish to evaluate the unique contribution of each of these effects. For this you could, for example, define:

4)To evaluate the unique association between performance and connectivity when controlling for IQ across all subjects, you would select the ‘all’, ‘performance’, and ‘IQ’ effects in the subject-effects list, and enter [0,1,0] in the between-subject contrast field (this is equivalent to a multiple regression of connectivity strength on IQ and performance). You can also use F-tests to evaluate the joint contribution of several effects. For example, you could select the ‘all’, ‘performance’, and ‘IQ’ effects in the subject-effects list, and enter [0,1,0; 0,0,1] in the between-subject contrast field to evaluate those areas associated with either IQ or performance.

Results: Can I test mixed within- between- subject models?

Yes. You could have two within-subject conditions (e.g. task and rest in a block design), and (if these conditions are defined in the conditions tab during the Setup step) the toolbox will compute the connectivity values for each subject and for each of these conditions. Then, in the Results tab you can simply select both ‘task’ and ‘rest’ in the conditions list, and enter [1,-1] in the between-conditions contrast field to evaluate the within-subject difference in connectivity between task and rest. For example, following the experimental example above (ANCOVA/regression models), you could define the following models/contrasts:

1)To evaluate the difference in connectivity between task and rest across all subjects, select ‘all’ in the subject-effects list, select ‘task’ and ‘rest’ in the conditions list, and enter [1,-1] in the between-conditions contrast field (this is equivalent to a paired t-test)

2)To evaluate possible condition*group interactions (e.g. modulation of task vs. rest connectivity differences across groups), select ‘patients’ and ‘controls’ in the subject-effects list and enter [1,-1] in the between-subject contrast field, and select ‘task’ and ‘rest’ in the conditions list and enter [1,-1] in the between-conditions contrast field (this is equivalent to a repeated-measures ANOVA model)

Another potential source of within-subject effects is multiple seeds/ROIs. For example, you might wish to evaluate how similar/different the connectivity patterns of two different seeds are. For this you would simply select the two ROIs in the sources list, and enter [1,-1] in the between-sources contrast field. Of course you can combine this with any between-subject effects/contrasts as well as between-condition effects/contrasts (including multiple contrasts entered as multiple rows in any of the within- or between-subject contrast fields) to evaluate more complex models as well.

Results: Can I evaluate F-contrasts within the conn toolbox second-level analyses?

This feature has been implemented in version 13f. F-contrasts in voxel-level analyses are implemented as repeated-measures analyses using ReML estimation of covariance components and evaluated through F-statistical parameter maps. F-contrasts in ROI-level analyses are implemented as multivariate analyses and evaluated through F- or Wilks lambda statistics depending on the dimensionality of the within- and between- subjects contrasts.

Results: How do I correct the second-level results for multiple comparisons?

For seed-to-voxel or voxel-to-voxel analyses, typically the analysis results are considered appropriately corrected for multiple comparisons (across all brain/analysis voxels) as long as at least one of either the height (voxel-level) or the extent (cluster- or peak- level) thresholds uses an analysis-wise false positive control method (either FDR- or FWE- corrected p-values).

Similarly, for ROI-to-ROI analyses, typically the analysis results are considered appropriately corrected for multiple comparisons (across all seeds/ROIs) as long as at least one of either the height (connection-level) or the extent (seed- or network- level) thresholds uses an analysis-wise false positive control method (either FDR- or FWE- corrected p-values).

Note, nevertheless, that there is some discussion as to what constitutes an 'appropriate' correction. Much of this discussion boils down to making sure that your statistical inferences always pertain to the analysis units corresponding to the one threshold that is using an analysis-wise false positive control method (e.g. do not make inferences about individual clusters of activation if you are correcting using voxel-level FDR correction; do not make inferences about individual ROI-to-ROI connections if you are using seed-level FWE correction; etc.)
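For reference, the FDR correction mentioned above follows the standard Benjamini-Hochberg step-up procedure, which can be sketched in a few lines (generic Python, not CONN code; CONN/SPM apply it per analysis, e.g. over all voxels, connections, or seeds depending on the chosen threshold):

```python
def fdr_bh(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure).

    Minimal sketch of the standard method: sort the p-values, scale each by
    m/rank, then enforce monotonicity walking from the largest p downward.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from largest p to smallest
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Toy set of 8 uncorrected p-values (e.g. one per connection)
q = fdr_bh([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
# thresholding q at .05 controls the expected proportion of false discoveries
```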

Results: I want to enter the first-level connectivity maps into SPM or other toolbox for additional second-level analyses. Where can I find the appropriate first-level maps?

The volumes created during the first-level analysis step (one per subject/condition/source) contain the connectivity maps (e.g. Fisher-transformed correlation values if using ‘bivariate correlation’ connectivity measures). The file _list_conditions.txt in the same folder will tell you the correspondence between condition numbers (in the filenames) and condition names (as defined in the conn project), and the file _list_sources.txt in the same folder will tell you the correspondence between source numbers and source names.

Results: Can I get from the toolbox the correlation ROI-to-ROI matrix for a single subject?

Yes. The corresponding per-subject results file contains a matrix Z with the ROI-to-ROI connectivity values (Fisher-transformed correlation coefficients). In particular, the value Z(i,j) will contain the connectivity between source ROI 'i' and target ROI 'j'. The names of these ROIs can be read from the variables 'names' and 'names2', respectively, in the same .mat file. In other words, names{i} is the source ROI and names2{j} is the target ROI corresponding to the Z(i,j) value. Note that source ROIs are all of the ROIs that you entered as sources in the first-level analysis step (typically a subset of all of the ROIs entered in the original Setup step). In contrast, target ROIs are all of the ROIs entered in the Setup step. The ROIs are sorted so that the square matrix Z(:,1:size(Z,1)) will contain the connectivity among the source ROIs only.

In addition, the file resultsROI_Condition*.mat in the same folder will contain the same values concatenated across all subjects (Z will now be a 3-dimensional matrix -ROIs by ROIs by subjects- of connectivity values).

Results: What are NBS, connection-, seed-, and network-level statistics in ROI-to-ROI analyses, and how do I use them?

NBS stands for Network Based Statistics, a method proposed by Zalesky et al. (2010) that controls for family-wise error (FWE) in mass-univariate testing of a full ROI-to-ROI connectivity matrix. It is a non-parametric approach based on permutation tests that looks at the extent of interconnected effects when analyzing the ROI-to-ROI connectivity matrix. Conceptually it is similar to the ‘cluster-level’ FWE-control in voxel-level analyses, where by combining a voxel-level threshold with cluster-based statistics you can make inferences about the extent of specific suprathreshold clusters, typically gaining considerable power compared to individual voxel-level inferences. In the case of functional connectivity analyses, by combining a connection-level threshold with network-based statistics, you can make inferences about the extent of specific subnetworks of connected ROIs, similarly gaining considerable power compared to individual connection-level inferences.

When using the ‘explore ROI-to-ROI results’ GUI, if you select a single ROI as a seed, the toolbox will analyze the connectivity between this seed ROI and all other ROIs, and it will offer the following measures and statistics:

a) connection-level statistics. These represent the individual tests between the chosen seed ROI and each of the other ROIs. Only those connections above the chosen ‘connection-level’ threshold are displayed and listed in the results table.

b) seed-level (F-test) statistics. This uses a multivariate test to jointly evaluate whether the connectivity between this seed and all other ROIs shows any significant effect of interest.

c) seed-level (NBS) ‘size’ and ‘intensity’ measures. These measures represent alternative ways to jointly evaluate whether the connectivity between the chosen seed ROI and all other ROIs shows any significant effect of interest. Specifically, these measures represent the number of suprathreshold connections between this seed and all other ROIs (above the defined connection-level threshold), and their overall strength (sum of absolute T-values over these suprathreshold connections), respectively. Enabling permutation tests will also give you the statistics (p-values) associated with these two measures. These measures, as well as their associated statistics, will naturally vary depending on your choice of connection-level threshold. Typically, lower (more conservative) connection-level p-values are more sensitive to strong local effects (e.g. strong effects between this seed and a few other ROIs), while higher (more liberal) connection-level p-values are more sensitive to weaker distributed effects (e.g. weaker effects between this seed and many ROIs). Statistics based on ‘intensity’ are often considered more powerful than those based on ‘size’, and this is typically most apparent when using relatively liberal (higher) connection-level p-value thresholds.

In terms of multiple-comparison corrections, since in this case we are interested in the connectivity between a single seed ROI and all other ROIs, options (b) and (c) do not require additional control (uncorrected p-values are the proper way to evaluate ‘significance’ in this case; note that in option (c) you may choose uncorrected or FDR-corrected ‘connection-level’ thresholds, which only affects sensitivity, not the validity of the results), while option (a) requires some additional level of false positive control if one wishes to perform inferences about the individual connections (e.g. p-FDR ‘connection-level’ thresholds will be the proper way to evaluate ‘significant’ effects in this case).

When you choose more than one seed ROI (or when choosing ‘Select all’ to explore the entire connectome), multiple seeds are being tested simultaneously, so additional multiple-comparison corrections are needed in order to perform similarly valid inferences. In case (a), when you wish to obtain inferences about which specific ROI-to-ROI connections show significant effects of interest, a p-FDR (analysis-level) connection-level threshold is the appropriate way to evaluate ‘significance’ while correcting for multiple comparisons, although in terms of sensitivity this will often offer very little power to detect individual connections unless they show very strong or very widely occurring effects. In cases (b) and (c), when you wish to obtain inferences about which ROIs show significant effects (jointly evaluating whether the connectivity between each seed ROI and all other ROIs shows any significant effect of interest), p-FDR (false discovery rate) or p-FWE (family-wise error) seed-level thresholds are the appropriate ways to evaluate ‘significance’ while correcting for multiple comparisons across the multiple seed ROIs chosen. FDR correction is applied over all of the chosen seed ROIs (which may be fewer than all of the ROIs if you selected only a few seed ROIs instead of the ‘Select all’ option), while FWE correction is always applied over the entire connectivity matrix (irrespective of the number of seed ROIs chosen). As before, ‘seed (F-test)’ statistics are independent of the chosen connection-level threshold, while ‘seed (NBS)’ statistics will vary depending on the choice of connection-level threshold (the same recommendations as above apply regarding the sensitivity of these methods).

In addition to these seed- and connection- level thresholding options, when choosing multiple seed ROIs you may also select any of the ‘network (NBS)’ thresholding options in order to perform inferences about specific networks of interconnected effects, which you can obtain through the following additional measures:

d) network-level (NBS) ‘size’ and ‘intensity’ measures. These measures represent the number of suprathreshold connections (above the defined connection-level threshold), and their overall strength (sum of absolute T-values over these suprathreshold connections), respectively, across individual subnetworks of interconnected ROIs (two ROIs are considered ‘connected’ if they show a suprathreshold connection-level effect between them). Enabling permutation tests will also give you the statistics (p-values) associated with these two measures. Unlike seed-level statistics, which allow you to perform inferences regarding specific ROIs, these network-level statistics represent a way to jointly evaluate whether the connectivity within specific connected subnetworks of ROIs shows any significant effect of interest, so the resulting inferences pertain to these specific subnetworks of ROIs (not to the individual ROIs or connections comprising them). As before, these measures and their associated statistics will vary depending on the choice of connection-level threshold (the same recommendations as above apply regarding the sensitivity of these methods). In terms of multiple-comparison corrections, p-FWE network-level thresholds are the appropriate way to evaluate ‘significance’ in this case (to correct for the possibly multiple unconnected networks across the entire connectome). Note: when using network (NBS) thresholds, if you are using the default 'connectome' display type for visualizing the results, it is often useful to right-click on the results figure and select the ‘minimum degree’ ROI ordering option in order to more easily visualize the different sets of interconnected networks.
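The core NBS idea (threshold the ROI-to-ROI T-statistic matrix, find the subnetworks of interconnected suprathreshold effects, and score each by its ‘size’ and ‘intensity’) can be sketched as follows. This is a toy Python illustration that omits the permutation-based p-values; it is not CONN's actual code, and the `nbs_components` helper is hypothetical.

```python
from collections import deque

def nbs_components(tstats, threshold):
    """Size and intensity of each suprathreshold subnetwork.

    Sketch of the NBS idea (Zalesky et al. 2010): threshold the symmetric
    ROI-to-ROI T-statistic matrix, find connected components among the
    surviving edges, and report each component's number of suprathreshold
    connections ('size') and sum of absolute T-values ('intensity').
    """
    n = len(tstats)
    edges = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if abs(tstats[i][j]) > threshold:
                edges[i].append(j)
                edges[j].append(i)
    seen, components = set(), []
    for start in range(n):
        if start in seen or not edges[start]:
            continue
        comp, queue = set(), deque([start])   # breadth-first component search
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for v in edges[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        size = sum(1 for i in comp for j in edges[i] if j > i)
        intensity = sum(abs(tstats[i][j]) for i in comp for j in edges[i] if j > i)
        components.append((size, intensity))
    return components

# Toy 4-ROI T-statistic matrix: ROIs 0-1 and 0-2 survive a threshold of 2.0,
# forming one subnetwork of 2 connections; ROI 3 is unconnected
T = [[0.0, 3.1, 2.9, 0.0],
     [3.1, 0.0, 0.5, 0.0],
     [2.9, 0.5, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]]
comps = nbs_components(T, threshold=2.0)
```

In real NBS, the observed size/intensity of each subnetwork is then compared against the null distribution of the largest component obtained under permutations of the group labels.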

In terms of sensitivity (power) when analyzing the entire connectome, option (d) will often offer the highest sensitivity to detect effects of interest, followed by options (b) and (c), and with option (a) being typically the least sensitive. Of course this additional power comes at the cost of the spatial specificity of the resulting inferences. Option (a) allows highly spatially-specific inferences -about individual ROI-to-ROI connections-, option (d) offers comparatively lower spatial specificity (making inferences about the connectivity within networks of ROIs), with options (b) and (c) offering a somewhat intermediate level of spatial specificity (making inferences about the connectivity of individual ROIs).

Results: What do the ‘graph-theory’ measures (local/global efficiency and cost) represent? Could you recommend a paper or tutorial?

For each node n in a graph G: degree and cost are defined as the number and proportion, respectively, of connected neighbors; average path length and global efficiency are defined as the average shortest-path distance and the average inverse shortest-path distance, respectively, from node n to all other nodes in the graph; clustering coefficient and local efficiency are defined as the proportion of connected nodes and the average global efficiency, respectively, across all nodes in the local sub-graph of node n (the sub-graph consisting only of the nodes neighboring node n); and betweenness centrality is defined as the proportion of all shortest paths in the network that contain a given node.
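These definitions can be made concrete on a small binary graph (a toy Python sketch; not CONN code, where these measures are computed on graphs obtained by thresholding the ROI-to-ROI connectivity matrix):

```python
from collections import deque

def shortest_paths(adj, source):
    """BFS shortest-path distances from source on an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def node_measures(adj, n):
    """Degree, cost, and global efficiency of node n per the definitions above.

    degree = number of neighbors; cost = degree/(N-1); global efficiency =
    average inverse shortest-path distance to all other nodes (unreachable
    nodes contribute 0, since they are absent from the BFS distances).
    """
    N = len(adj)
    degree = len(adj[n])
    cost = degree / (N - 1)
    dist = shortest_paths(adj, n)
    eff = sum(1.0 / dist[v] for v in dist if v != n) / (N - 1)
    return degree, cost, eff

# A 4-node path graph: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
deg, cost, geff = node_measures(adj, 1)
# node 1 has 2 neighbors; distances to 0, 2, 3 are 1, 1, 2
```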

I would recommend reading the Latora & Marchiori (2001) manuscript for a description and rationale of these measures, as well as Achard & Bullmore (2007) for an example of their use.

Other: Do any of the files output by the toolbox contain the filtered fMRI data (i.e. voxel-wise time series data after removal of confounds and bandpass filtering)?

Yes, the DATA_Subject*_Condition*.matc files in the /results/preprocessing/ folder contain this information (the voxel-wise time-series after removal of confounds and band-pass filtering). The .matc files are stored in a slightly different format than .nifti files (for faster access by the toolbox GUI). You can use the conn_matc2nii.m script to convert these files to .nii format: after you load the conn project, and assuming you have already run the Setup step, simply run conn_matc2nii from the command line and this will write out the preprocessed data as nifti volumes. The preprocessed volumes will be located in the conn_*/preprocessing/ folder, named DATA_Subject###_Condition###.nii. In version 13 or above you may also simply select the ‘create confound corrected time series’ checkbox in the Setup->Options tab to have the toolbox generate these .nii files automatically during the preprocessing step.

Other: Are coordinates reported in Talairach or MNI space?

All coordinates are MNI (assuming you used the SPM normalization templates). Note: the .tal extension for text-based ROI definition files is a misnomer that we kept for backward compatibility.

Other: Are the FDR corrections in the 'seed-to-voxel explorer' done the older SPM5 way? i.e. not "peak" or "cluster"?

In versions 13e and below voxel-level FDR correction is done the old SPM5 way (at the level of voxels, not peaks), while cluster-level FDR correction is done the SPM8 way (at the level of clusters). In versions 13f and above we also added peak-level FDR-correction (the "SPM8 way", topological FDR, Chumbley et al.) and allow analyses to be thresholded based on this feature.

Other: I have to move my imaging data files from one drive to another; is there a way to edit the CONN toolbox to change the directory paths for subject image files and/or already computed analysis files?

If you move the connectivity project (the conn_*.mat file and the associated conn_* folder) to a different drive, everything should work directly without any problems. Simply load the connectivity project again (from its new location) in the toolbox GUI and the toolbox will automatically correct the file references. If what you are moving to a different drive is the subjects' data (meaning the original functional or anatomical volumes, or the ROI files, i.e. all of the information that you specify in the 'Setup' step, not the files that are created by the conn toolbox), everything should also work directly without any problems as long as the new location maintains the same folder structure as the original (e.g. your subject functional data is all below a given c:/someplace/subjectdata/* directory and you move this to a different directory d:/someotherplace/subjectdata/*). When you load the project in the conn GUI, it automatically checks whether all these files are still in the same location, and if they are not, the GUI asks you to locate the first 'mismatched' file. Once you specify the new location of this first file (and as long as the new folder structure is the same as the old one), the rest of the mismatched references are automatically corrected, so the GUI should not need to ask you to locate any additional files. As before, remember to save the project after this correction has been performed to keep these changes and avoid being prompted again.

Troubleshooting: Error message indicating that ROI files have not been defined for all subjects.

This error indicates that some of the ROIs have not been properly defined for all subjects. The most common cause of this error is adding an ROI but specifying the ROI file for the first subject only. When specifying the ROI file(s) associated with a given ROI you can: a) (for subject-independent ROIs) select all of the subjects in the 'Subjects' list, and then click on an ROI file in the 'Select ROI definition files' tab to assign the same ROI file to all subjects; or b) (for subject-specific ROIs) select each subject in the 'Subjects' list in turn and a different ROI file in 'Select ROI definition files' to assign a different ROI file to each subject.
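If you define your project through scripts instead of the gui, the same subject-independent vs. subject-specific distinction applies when filling in the ROI fields of a conn batch structure. A minimal sketch, following the conventions of the conn toolbox batch manual (the project and ROI file paths below are hypothetical placeholders):

```matlab
% Hedged sketch: defining ROIs via conn_batch (all paths are placeholders)
clear batch;
batch.filename='/myproject/conn_myproject.mat';    % existing conn project file

% a) Subject-independent ROI: a single file assigned to all subjects
batch.Setup.rois.names{1}='seedA';
batch.Setup.rois.files{1}='/mydata/rois/seedA.nii';

% b) Subject-specific ROI: a cell array with one file per subject
batch.Setup.rois.names{2}='seedB';
batch.Setup.rois.files{2}={'/mydata/sub01/seedB.nii', ...
                           '/mydata/sub02/seedB.nii'};

conn_batch(batch);
```

Defining every ROI for every subject this way avoids the partially-defined-ROI situation that triggers the error above.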

Troubleshooting: Error using ==> fwrite. Invalid byte count to skip.

A few possible causes of this error are: 1) the structural and functional volumes entered into the conn toolbox are not correctly coregistered (this would only happen if you are skipping the spatial-preprocessing steps in the toolbox); 2) SPM's segmentation step failed for some reason (resulting in empty CSF or white-matter masks); or 3) at least one defined condition is not associated with any scans (e.g. if you enter 0 in the 'duration' field).

I have not yet been able to replicate this error, but previous users were able to fix it by starting spm (type: spm fmri) *before* starting the conn toolbox (the error appears to be related to some missing spm folders in the matlab path).

This is typically a problem with folder write permissions, but it can also be caused by the limit on the number of files per folder in FAT-formatted drives (using NTFS format avoids this problem).

Troubleshooting: Error in ==> conn_menumanager

This error typically occurs if you issue a 'clear all' from the command line while the conn toolbox is open.

Troubleshooting: I have found an error in one subject, fixed it, and want to continue the analyses without redoing those already performed for the other subjects. How can I do this?

Simply click "Done" again in the step where the process was interrupted and, when prompted, answer "No" to the question "Overwrite existing subjects results?" (you might be prompted several times, once for each type of analysis the toolbox performs during this step). This will skip the steps/subjects that have already been completed and continue with the rest. The same procedure applies when you add new subjects or new ROI(s) to an already analyzed project.
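The same "do not overwrite" behavior is available when scripting. A sketch based on the conn toolbox batch manual conventions (the project filename is a placeholder):

```matlab
% Hedged sketch: re-running the Setup step without overwriting
% already-computed subjects/analyses (project path is a placeholder)
clear batch;
batch.filename='/myproject/conn_myproject.mat';  % existing conn project file
batch.Setup.done=1;            % run the Setup step...
batch.Setup.overwrite='No';    % ...skipping results that already exist
conn_batch(batch);
```

Analogous done/overwrite fields exist for the other processing steps, so the same pattern can be used to resume an interrupted Preprocessing or Analysis step.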

General: How to reference the conn toolbox?

Please reference the toolbox using the Whitfield-Gabrieli and Nieto-Castanon (2012) reference below and/or including a link to http://www.nitrc.org/projects/conn in your manuscript.

Here you will find a non-exhaustive list of recent articles by conn toolbox users. Please let us know when your article is published and we will be glad to add it to this list.