BETA RELEASE
MIST is currently considered to be in testing, meaning that you will need to inspect its output a bit more carefully to see if you are happy with it. The current release segments a limited number of structures. The full set of structures that FIRST segments, as well as the brainstem nuclei described in [2], will be supported in an upcoming release.
Running MIST
Running MIST is a two-stage process: in the first stage, the method is trained to learn the appearance of each structure in your particular set of images; in the second stage, the trained model is used to segment your images. No manual segmentations are required for the training stage.
The current beta release supports these structures:
- Putamen
- Globus pallidus
- Caudate nucleus (including nucleus accumbens)
- Thalamus
Prerequisites
The distributed version of MIST is set up to work with the following types of images:
- T1-weighted. This type of image is required, as it is what the initial registration to standard space is based on.
- T2-weighted. This includes T2*-weighted and fluid-attenuated (FLAIR) scans.
- Fractional anisotropy (FA).
MIST assumes that these volumes have been registered, so you will need to do this using FLIRT before running the training stage. To register FA volumes to a T1-weighted volume, you can use mist_FA_reg.
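For the T2-weighted scans, a rigid-body FLIRT registration to the T1-weighted volume might look like the following sketch; the filenames are placeholders matching the example later on this page, so substitute your own:

```shell
# Rigid-body (6 DOF) registration of a FLAIR scan to the T1-weighted scan.
# Filenames are hypothetical; adjust to your own data.
flirt -in FLAIR -ref structural -out FLAIR_reg -omat FLAIR_to_T1.mat -dof 6
```

The registered output (FLAIR_reg.nii.gz here) is the file you would then list in mist_filenames.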
Training
Prior to running the training stage, you will need to tell MIST about the types of images that you have. This is done using a text file called mist_filenames, which specifies for each modality:
- An arbitrary name for the modality (e.g. 'T1', 'FLAIR' or 'Follow-up T1'). This name should be 'T1' for the volume that is to be used for standard space registration.
- The type of the modality: One of T1, T2 or FA (see above).
- The filename relative to the subject directory.
- The original resolution of the scan. This cannot be determined from the input images, as these will in general already have been resampled. This is a scalar value, so for anisotropic scans you will need to approximate it; a rough approximation is fine.
An example mist_filenames file might read:
"T1","T1","structural",1.0
"FLAIR","T2","FLAIR",1.5
"T2*","T2","FLASH",1.0
This specifies that for each subject, MIST can expect to find a T1-weighted scan called structural.nii.gz and two 'T2-weighted' scans called FLAIR.nii.gz and FLASH.nii.gz. The automatic configuration can handle the contrast differences between the two 'T2' scans.
A second configuration file called mist_subjects tells MIST the names of the subject folders that contain these images, for example:
/home/xyz/study/subject001
/home/xyz/study/subject002
...
In this example, both the mist_filenames and mist_subjects files would be located in /home/xyz/study. If you have many subjects, you can use a subset for training by creating an optional file called mist_training_subjects; this should contain a representative subset of the subject directories, which will then be used for training.
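As a simple sketch, assuming the subjects in mist_subjects are listed in a suitable order, a training subset could be created like this (the count of 50 is an arbitrary example):

```shell
# Take the first 50 entries of mist_subjects as the training subset.
# 50 is an arbitrary example count; choose a sample that is representative
# of your study (e.g. spanning scanners and acquisition protocols).
head -n 50 mist_subjects > mist_training_subjects
```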
There are no hard rules for the number of training subjects required. For studies with fewer than 50 subjects, we would normally recommend using all data for training. With more than 100 subjects, the improvements will most likely be minimal, and taking a subset is recommended to reduce runtime.
As a final check, your files should now be organised in the following way (for this example):
/home/xyz/study/mist_filenames
/home/xyz/study/mist_subjects
/home/xyz/study/subject001/structural.nii.gz
/home/xyz/study/subject001/FLAIR.nii.gz
/home/xyz/study/subject001/FLASH.nii.gz
/home/xyz/study/subject002/structural.nii.gz
/home/xyz/study/subject002/FLAIR.nii.gz
/home/xyz/study/subject002/FLASH.nii.gz
...
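A quick shell loop can check this layout before you start training. The filenames below match this example; they would need adjusting to match your own mist_filenames:

```shell
# Report any missing image files for the subjects listed in mist_subjects.
# The three base names correspond to this example's mist_filenames entries.
while read -r subject; do
  for name in structural FLAIR FLASH; do
    [ -f "$subject/$name.nii.gz" ] || echo "Missing: $subject/$name.nii.gz"
  done
done < mist_subjects
```

If this prints nothing, all expected files are in place.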
You are now ready to run training. This is done by running mist_1_train from within the /home/xyz/study directory. Note that this may take a long time if fsl_sub cannot submit to a cluster (hours to days, depending on image resolution and the number of training subjects).
There is no requirement for the mist_filenames and mist_subjects files to be located within the directory structure containing the data. When using another location, cd into that directory and call mist_1_train from there. In this case, it is important that the paths in mist_subjects are absolute or relative to this directory.
If you are only interested in a single structure (or a subset of structures), you can speed up training by specifying any of putamen, pallidum, caudate_accumbens and thalamus on the command line. For example, to only train the putamen and thalamus models, run
mist_1_train putamen thalamus
Segmenting images
When training is complete, the mist_out folder should contain a number of files called mist_autosetup_<structure>.txt. To use these to segment your images, run mist_2_fit from the top-level directory for your study (e.g. /home/xyz/study).
When the segmentation step is finished, each of your subject folders should contain the following files for each structure (among other output files):
- mist_autosetup_<structure>_mask.nii.gz. The final segmentation as a voxel-based mask.
- mist_autosetup_<structure>_shape.vtk. A mesh representing the final segmentation. This can be used for shape analysis (see below).
- mist_autosetup_<structure>_shape_reg.vtk. The same mesh after affine registration to MNI coordinates. This can also be used for shape analysis.
In addition, the file mist_nonoverlapping.nii.gz contains the segmentations of all structures in a single 3D volume. In this file, voxels in which segmentations overlap have been assigned to the structure for which they are most interior.
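If you want one structure from this combined volume as a separate binary mask, you can threshold on its label value with fslmaths. The label value below is an assumption for illustration; check which intensity corresponds to which structure in your own output first:

```shell
# Extract the structure with (hypothetical) label value 1 from the combined
# volume and binarise it. Replace <subject> with a subject directory.
fslmaths <subject>/mist_nonoverlapping -thr 1 -uthr 1 -bin <subject>/structure1_mask
```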
Volume statistics
The volume of a mesh can be found using:
mist_mesh_utils volume <subject>/mist_autosetup_<structure>_shape.vtk
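To gather these volumes for all subjects into a single table, a loop over mist_subjects can be used. This sketch assumes that mist_mesh_utils volume prints just a number, and uses the putamen as an example structure; check the command's actual output format before relying on it:

```shell
# Collect putamen mesh volumes for every subject into a CSV file.
echo "subject,volume" > putamen_volumes.csv
while read -r subject; do
  vol=$(mist_mesh_utils volume "$subject/mist_autosetup_putamen_shape.vtk")
  echo "$subject,$vol" >> putamen_volumes.csv
done < mist_subjects
```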
The obtained volumes can be analysed using any standard statistics package. You may want to take head size into account in your analysis.
Shape statistics
After running mist_2_fit, these files will have been created in the mist_out directory for each structure:
<structure>_distances_native.csv. A table containing the distances of the original (native-space) segmentations to the reference mesh after rigid-body registration. These distances include brain size-related scaling, which you may want to account for in your analysis.
<structure>_distances_mni.csv. A similar table obtained by first applying the affine transformation to MNI space and then doing a 3 DOF (translation only) registration. This table does not include brain size-related scaling, as such scaling will have been accounted for by the affine transformation.
These tables can be used to perform shape analysis using PALM. To perform inference using 2D TFCE, use the options -T -tfce2D and specify the appropriate reference mesh using the -s option (use the .gii file). For example, use the commands
design_ttest2 design 20 20
palm -i left_putamen_distances_mni.csv -s $FSLDIR/data/meshes/left_putamen.gii -d design.mat -t design.con -T -tfce2D -o palm_left_putamen
to do an unpaired t-test with 20 subjects in each group. See GLM for more information on how to set up your statistical model.
Visualisation
A simple 3D visualisation of results can be obtained using a command such as
mist_display pvals $FSLDIR/data/meshes/left_putamen.gii palm_left_putamen_tfce_tstat_fwep_c1.csv palm_left_putamen_tfce_tstat_fwep_c2.csv
where the first contrast is a positive effect and the second contrast the corresponding negative effect. Thresholding is fixed at p=0.05 for a single contrast or p=0.025 for two contrasts (i.e. a two-sided test). You can use the mouse to rotate and zoom.