Contents
- Introduction
- User Guide
- FAQ
FSL Tools used
This section lists the generic FSL programs that SIENA uses.
bet - Brain Extraction Tool. This automatically removes all non-brain tissue from the image. It can optionally output the binary brain mask that was derived during this process, and output an estimate of the external surface of the skull, for use as a scaling constraint in later registration.
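For example, a stand-alone BET call producing the extracted brain, the binary brain mask and the skull image might look like the following (file names are illustrative; siena calls bet for you, so this is only to show what the outputs correspond to):
bet input_T1 input_T1_brain -f 0.5 -m -s
Here -m writes out the binary brain mask and -s writes out the skull image, alongside the extracted brain.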
pairreg, a script supplied with FLIRT - FMRIB's Linear Image Registration Tool. This script calls FLIRT with a special optimisation schedule, to register two brain images whilst at the same time using two skull images to hold the scaling constant (in case the brain has shrunk over time, or the scanner calibration has changed). The script first calls FLIRT to register the brains as fully as possible. This registration is then applied to the skull images, but only the scaling and skew are allowed to change. This is then applied to the brain images, and a final pass optimally rotates and translates the brains to get the best final registration.
fast - FMRIB's Automated Segmentation Tool. This program automatically segments a brain-only image into different tissue types (normally background, grey matter, white matter, CSF and other). It also corrects for bias field. It is used in various ways in the SIENA scripts. Note that both siena and sienax allow you to choose between segmentation of grey matter and white matter as separate classes or a single class. It is important to choose the right option here, depending on whether there is or is not reasonable grey-white contrast in the image.
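Similarly, a stand-alone FAST call on an already brain-extracted T1 image might look like this (again purely illustrative, since siena and sienax call fast internally):
fast -t 1 -n 3 input_T1_brain
where -t 1 declares a T1-weighted input and -n 3 requests three tissue classes (grey matter, white matter and CSF).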
SIENA - Two-Time-Point Estimation
Usage
A default SIENA analysis is run by typing:
siena <input1> <input2>
The input filenames must not contain directory names - i.e. all must be done within a single directory.
Other options are:
-o <output-dir> : set output directory (the default output is <input1>_to_<input2>_siena)
-d : debug (don't delete intermediate files)
-B "bet options" : if you want to change the BET defaults, put BET options inside double-quotes after using the -B flag. For example, to increase the size of brain estimation, use: -B "-f 0.3"
-2 : two-class segmentation (don't segment grey and white matter separately) - use this if there is poor grey/white contrast
-t2: tell FAST that the input images are T2-weighted and not T1
-m : use standard-space masking as well as BET (e.g. if it is proving hard to get reliable brain segmentation from BET, for example if eyes are hard to segment out) - register to standard space in order to use a pre-defined standard-space brain mask
-t <t>: ignore from t (mm) upwards in MNI152/Talairach space - if you need to ignore the top part of the head (e.g. if some subjects have the top missing and you need consistency across subjects)
-b <b>: ignore from b (mm) downwards in MNI152/Talairach space; b should probably be -ve
-S "siena_diff options" : if you want to send options to the siena_diff program (that estimates change between two aligned images), put these options in double-quotes after the -S flag. For example, to tell siena_diff to run FAST segmentation with an increased number of iterations, use -S "-s -i 20"
-V : run ventricle analysis VIENA
-v <mask image> : optional user-supplied ventricle mask (default is $FSLDIR/bin/MNI152_T1_2mm_VentricleMask)
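Putting several of these options together, a call for a pair of scans with poor grey/white contrast, a larger brain estimate from BET and standard-space masking enabled might look like this (image names are illustrative):
siena scan_year1 scan_year2 -B "-f 0.3" -2 -m -o scan_year1_to_scan_year2_siena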
What the script does
siena carries out the following steps:
Run bet on the two input images, producing as output, for each input: the extracted brain, the binary brain mask and the skull image. If you need to call BET with a different intensity threshold than the default of 0.5, pass it via the -B option described above, e.g. -B "-f <threshold>".
Run siena_flirt, a separate script, to register the two brain images. This first calls the FLIRT-based registration script pairreg (which uses the brain and skull images to carry out constrained registration). It then deconstructs the final transform into two half-way transforms which take the two brain images into a space halfway between the two, so that they both suffer the same amount of interpolation-related blurring. Finally the script produces a multi-slice gif picture showing the registration quality, with one transformed image as the background and edges from the other transformed image superimposed in red.
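For illustration, the halfway transforms produced here (listed with the other output files below) could be applied by hand with flirt, which is roughly what happens internally in the next step (image names are illustrative):
flirt -in A_brain -ref A_brain -init A_halfwayto_B.mat -applyxfm -out A_halfwayto_B_brain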
The final step is to carry out the change analysis on the registered brain images. This is done using the program siena_diff. (To slightly improve the accuracy of siena_diff, a self-calibration script, siena_cal, described later, is run before this.) siena_diff carries out the following steps:
- Transforms original whole head images and brain masks for each time point into the space halfway between them, using the two halfway transforms previously generated.
- Combines the two aligned masks using logical OR (if either is 1 then the output is 1).
- The combined mask is used to mask the two aligned head images, resulting in aligned brain images.
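The mask combination and masking in the last two steps could equivalently be done by hand with fslmaths (file names are illustrative):
fslmaths A_halfway_mask -max B_halfway_mask combined_mask
fslmaths A_halfway_head -mas combined_mask A_halfway_brain
fslmaths B_halfway_head -mas combined_mask B_halfway_brain
Since the masks are binary, taking the voxelwise maximum is the same as a logical OR, and -mas applies the combined mask to each aligned head image.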
The change between the two aligned brain images is now estimated, using the following method (note that options given to the siena script are passed on to siena_diff): Apply tissue segmentation to the first brain image. At all points which are reported as boundaries between brain and non-brain (including internal brain-CSF boundaries), compute the distance that the brain surface has moved between the two time points. This motion of the brain edge (perpendicular to the local edge) is calculated on the basis of sub-voxel correlation (matching) of two 1D vectors; these are taken from the 3D images, a fixed distance either side of the surface point, and perpendicular to it, and are differentiated before correlation, allowing for some intensity variation between the two original images. Finally, compute the mean perpendicular surface motion and convert it to PBVC.
To make this conversion between mean perpendicular edge motion and PBVC, it is necessary to assume a certain relationship between real brain surface area, number of estimated edge points and real brain volume. This number can be estimated for general images, but will vary according to slice thickness, image sequence type, etc, causing small scaling errors in the final PBVC. In order to correct for this, self-calibration is applied, in which siena calls siena_cal. This script runs siena_diff on one of the input images relative to a scaled version of itself, with the scaling pre-determined (and therefore known). Thus the final PBVC is known in advance and the estimated value can be compared with this to get a correction factor for the current image. This is done for both input images and the average taken, to give a correction factor to be fed into siena_diff.
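As a purely hypothetical illustration of the self-calibration: if siena_cal scales an input image by a known factor corresponding to a true PBVC of exactly 1.00%, and siena_diff estimates 0.90% for that image pair, the estimate is about 10% too low, so a correction factor of roughly 1.00/0.90 ≈ 1.11 would be needed to recover the true value; the factors obtained from the two input images are then averaged and fed into the real two-time-point siena_diff run.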
The files created in the SIENA output directory are:
- report.siena : the SIENA log, including the final PBVC estimate.
- report.html : a webpage report including images showing various stages of the analysis, the final result and a description of the SIENA method.
- A_halfwayto_B_render : a colour-rendered image of edge motion superimposed on the halfway A image. Red/yellow means brain volume increase and blue means brain volume decrease ("atrophy").
- A_and_B.gif : a gif image showing the results of the registration, using one transformed image as the background and the other as the coloured edges foreground.
- A_to_B.mat : the transformation taking A to B, using the brain and skull images.
- B_to_A.mat : the transformation taking B to A, using the brain and skull images.
- A_halfwayto_B.mat and B_halfwayto_A.mat : the transformations taking the two images to the halfway positions.
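For scripting, the final PBVC estimate can be pulled out of the log with something like the following (the exact label printed in report.siena may differ between FSL versions):
grep PBVC A_to_B_siena/report.siena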
Ventricular extension - VIENA
In FSL 5 a ventricular analysis option was introduced; it is invoked with the -V option.
- Outputs are written to a sub-directory of the siena output directory named viena.
- A separate html report page, reportviena.html, can be found in the viena directory.
- A default ventricle mask (in standard space) is supplied, but users may supply their own if they wish.
- The VIENA extension is provided courtesy of the VU medical center Amsterdam, The Netherlands.
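For example, to run SIENA with the ventricular analysis and a user-supplied ventricle mask (mask name illustrative):
siena scan_year1 scan_year2 -V -v my_ventricle_mask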
SIENAX - Single-Time-Point Estimation
Usage
A default SIENAX analysis is run by typing:
sienax <input>
The input filename must not contain directory names - i.e. all must be done within the current directory.
Other options are:
-o <output-dir> : set output directory (the default output is <input>_sienax)
-d : debug (don't delete intermediate files)
-B "bet options" : if you want to change the BET defaults, put BET options inside double-quotes after using the -B flag. For example, to increase the size of brain estimation, use: -B "-f 0.3"
-2: two-class segmentation (don't segment grey and white matter separately) - use this if there is poor grey/white contrast
-t2: tell FAST that the input images are T2-weighted and not T1
-t <t>: ignore from t (mm) upwards in MNI152/Talairach space - if you need to ignore the top part of the head (e.g. if some subjects have the top missing and you need consistency across subjects)
-b <b>: ignore from b (mm) downwards in MNI152/Talairach space; b should probably be -ve
-r: tell SIENAX to estimate "regional" volumes as well as global; this produces peripheral cortex GM volume (3-class segmentation only) and ventricular CSF volume
-lm <mask>: use a lesion (or lesion+CSF) mask to remove incorrectly labelled "grey matter" voxels
-S "FAST options" : if you want to change the segmentation defaults, put FAST options inside double-quotes after using the -S flag. For example, to increase the number of segmentation iterations use: -S "-i 20"
What the script does
sienax carries out the following steps:
Run bet on the single input image, outputting the extracted brain and the skull image. If you need to call BET with a different intensity threshold than the default of 0.5, pass it via the -B option described above, e.g. -B "-f <threshold>".
Run pairreg (which uses the brain and skull images to carry out constrained registration); the MNI152 standard brain is the target (reference), using brain and skull images derived from the MNI152. Thus, as with two-time-point atrophy, the brain is registered (this time to the standard brain), again using the skull as the scaling constraint. Thus brain tissue volume (estimated below) will be relative to a "normalised" skull size. (Ignore the "WARNING: had difficulty finding robust limits in histogram" message; this appears because FLIRT isn't too happy with the unusual histograms of skull images, but is nothing to worry about in this context.) Note that all later steps are in fact carried out on the original (but stripped) input image, not the registered input image; this is so that the original image does not need to be resampled (which introduces blurring). Instead, to make use of the normalisation described above, the brain volume (estimated by the segmentation step described below) is scaled by a scaling factor derived from the normalising transform, before being reported as the final normalised brain volume.
A standard brain image mask (derived from the MNI152 and slightly dilated) is transformed into the original image space (by inverting the normalising transform found above) and applied to the brain image. This helps ensure that the original brain extraction does not include artefacts such as eyeballs.
Segmentation is now run on the masked brain using fast. If there is reasonable grey-white contrast, grey matter and white matter volumes are reported separately, as well as total brain volume (this is the default behaviour). Otherwise (i.e. if sienax was called with the -2 option), just brain/CSF/background segmentation is carried out, and only brain volume is reported. Before reporting, all volumes are scaled by the normalising scaling factor, as described above, so that all subjects' volumes are reported relative to a normalised skull size.
The main files created in the SIENAX output directory are:
- report.sienax : the SIENAX log, including the final volume estimates.
- report.html : a webpage report including images showing various stages of the analysis, the final result and a description of the SIENAX method.
- I_render : a colour-rendered image showing the segmentation output superimposed on top of the original image.
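For scripting, the normalised volumes and the volumetric scaling factor can be pulled out of the log with something like the following (the exact labels in report.sienax, e.g. VSCALING, GREY, WHITE and BRAIN, may differ between FSL versions):
grep -E "VSCALING|GREY|WHITE|BRAIN" subject_T1_sienax/report.sienax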
Voxelwise SIENA Statistics
We have extended SIENA to allow the voxelwise statistical analysis of atrophy across subjects. This takes a SIENA-derived edge "flow image" (edge displacement between the two timepoints) for each subject, warps these to align with a standard-space edge image and then carries out voxelwise cross-subject statistical analysis to identify brain edge points which, for example, are significantly atrophic for the group of subjects as a whole, or where atrophy correlates significantly with age or disease progression.
In order to carry out voxelwise SIENA statistics, do the following:
- Run
siena A B
on all subjects' two-timepoint data (here A and B).
- For each subject, run
cd <siena_output_directory>
siena_flow2std A B
This runs flirt to generate the transform to standard space (if it doesn't already exist), takes the edge flow (atrophy) image generated by siena, dilates it several times (to "thicken" the edge flow image), transforms it to standard space, and masks it with a standard-space edge mask. It then smooths the result with a default Gaussian filter of half-width 5mm before remasking. If you want to change the smoothing, use the -s option; set the smoothing to zero to turn it off completely. All subjects will now have an edge flow image in standard edge space called A_to_B_flow_to_std.
- Merge these into a single 4D image; for example, if each subject's analysis has so far been carried out in a subdirectory called subject_*/A_to_B_siena, where the * could be a subject ID or name, use a command such as:
fslmerge -t flow_all_subjects `imglob subject_*/A_to_B_siena/A_to_B_flow_to_std*`
Note: it is very important that the order in which the subjects appear in this command matches the order you intend when you then create the design matrix!
- You are now ready to carry out the cross-subject statistics. We recommend using randomise for this, as the above steps are very unlikely to generate nice Gaussian distributions in the data. You will need to generate a FEAT-style design matrix design.mat and contrasts file design.con. The mask image that you use for randomise should be ${FSLDIR}/data/standard/MNI152_T1_2mm_edges.
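A typical randomise call for this analysis might look like the following (the output root name and number of permutations are illustrative):
randomise -i flow_all_subjects -o flow_stats -d design.mat -t design.con -m ${FSLDIR}/data/standard/MNI152_T1_2mm_edges -n 5000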