Contents
- Introduction
- Mac OS X
- Linux
- Windows
- Source Code
- Shell Setup
- FAQ
Availability of FSL
FSL is available ready to run for Mac OS X and Linux (CentOS or Debian/Ubuntu), with Windows computers supported via a Linux virtual machine. We also provide source code if you run an OS that we do not directly support.
Installing FSL
We strongly recommend that the FSL software be downloaded and installed using our new install script, available from the link below:
Once you have downloaded the installer, fslinstaller.py, you can use it to install FSL on your computer by following these instructions:
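For example, assuming the installer has been saved to your current directory, a typical run looks like this (the exact prompts and the default destination, commonly /usr/local/fsl, can vary between installer versions):

python fslinstaller.py
# The installer asks where to install FSL (accept the default or give a path),
# then downloads and unpacks the release, and can update your shell's startup
# files so the tools are available in new terminals.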
Patching FSL
After you have installed FSL for the first time, you can follow these instructions to apply upgrades/patches (e.g. from version 5.0.0 to 5.0.1):
Running FSL
Shell setup
The FSL install script will set up your computer so that you can run the FSL tools from a terminal. See our shell setup guide for details on what this script does. On Linux computers it can also be used to configure FSL for all users of the computer.
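For reference, the per-user setup it performs is equivalent to adding a snippet like the following to your shell's startup file (a minimal sketch for BASH, assuming the common install location /usr/local/fsl; adjust FSLDIR to match your installation):

# Appended to ~/.bash_profile (or equivalent) by the install script
FSLDIR=/usr/local/fsl
PATH=${FSLDIR}/bin:${PATH}
export FSLDIR PATH
. ${FSLDIR}/etc/fslconf/fsl.sh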
Starting the programs
Once your account is configured for FSL use, you can run the FSL tools from the command line; the tools are stored in $FSLDIR/bin and this location will have been added to your shell's search path ($PATH) for ease of use.
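To quickly confirm that a new terminal is configured correctly, you can check that the variable and search path are in place (the output shown assumes an install under /usr/local/fsl):

echo $FSLDIR   # should print your FSL folder, e.g. /usr/local/fsl
which bet      # should print $FSLDIR/bin/bet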
In general, command-line programs are lower case (e.g. bet) and the GUI versions are capitalised (e.g. Bet). The exception is Mac OS X, where the default filesystem does not distinguish upper- and lower-case filenames, so the GUIs instead have _gui appended (e.g. Bet_gui).
To bring up a simple GUI that is just a menu of the main individual FSL GUI tools, type fsl.
Customising
There are several options you can set to change the way FSL behaves. These options are set using environment variables; see our page on FSL environment variables for details on what can be configured, or look in the files ${FSLDIR}/etc/fslconf/fsl.sh (if you use BASH/DASH) or ${FSLDIR}/etc/fslconf/fsl.csh (if you use CSH/TCSH) (what shell am I using?).
When the shell setup commands are sourced, default settings are applied. You can override these defaults by creating a folder .fslconf in your home folder and creating a file fsl.sh (BASH users) or fsl.csh (TCSH users) within it. This file should contain new definitions for any settings you wish to change.
For example, to change the output file format of the FSL tools to NIFTI pairs, do the following:
- Create the configuration folder:
mkdir ~/.fslconf
cd ~/.fslconf
- Create the fsl.sh (or fsl.csh) file with the FSLOUTPUTTYPE environment variable set to NIFTI_PAIR.
For BASH users:
echo "FSLOUTPUTTYPE=NIFTI_PAIR" > fsl.sh
echo "export FSLOUTPUTTYPE" >> fsl.sh
For TCSH users:
echo "setenv FSLOUTPUTTYPE NIFTI_PAIR" > fsl.csh
Alternatively, if the fsl.sh file already exists you can edit it with the text editor of your choice.
DO NOT copy the centrally installed files into ~/.fslconf
If you copy ${FSLDIR}/etc/fslconf/fsl.sh or ${FSLDIR}/etc/fslconf/fsl.csh into ~/.fslconf you will cause a loop that will stop you from being able to log in!
We recommend that you only change the FSL settings that differ from the defaults in your ~/.fslconf/fsl.sh (or fsl.csh) and nothing else.
If you wish to change the settings for all users you can create the file /etc/fslconf/fsl.sh (or /etc/fslconf/fsl.csh) on a machine-by-machine basis. If you wish to store the settings centrally, we also check the file /usr/local/etc/fslconf/fsl.sh (or its fsl.csh equivalent), so you could, for example, NFS-mount this folder.
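For example, a machine-wide override setting the default output format for every user could be as small as this (a sketch; it uses the same FSLOUTPUTTYPE variable described above):

# /etc/fslconf/fsl.sh - site-wide FSL settings for all users of this machine
FSLOUTPUTTYPE=NIFTI_PAIR
export FSLOUTPUTTYPE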
Using FSL with a GridEngine (or similar) computing cluster
Several of the more compute-intensive tools can take advantage of cluster computing via Son of Grid Engine or Open Grid Scheduler (http://gridscheduler.sourceforge.net/), both forks of Sun Grid Engine. We recommend Son of Grid Engine if you are building a cluster from scratch on a CentOS system, as they provide RPMs to ease installation. Debian/Ubuntu users should look to install the gridengine package.
Cluster aware tools
FEAT will run multiple first-level analyses in parallel if they are set up all together in one GUI setup. At second level, if full FLAME (stages 1+2) is selected then all slices are processed in parallel.
MELODIC will run multiple single-session analyses (or single-session preprocessing if a multi-session/subject analysis is being done) in parallel if they are set up all together in one GUI setup.
TBSS will run all registrations in parallel.
BEDPOSTX (FDT) low-level diffusion processing will run all slices in parallel.
FSLVBM will run all registrations in parallel, both at the template-creation stage and at the final registrations stage.
POSSUM will process all slices in parallel.
All the above tools interact with a compute cluster via a single central script, fsl_sub; if no cluster is available then this script silently runs all the requested jobs in series. To customise FSL for your local compute cluster and clustering software, simply edit ${FSLDIR}/bin/fsl_sub; hopefully the comments in this file are sufficient to make this fairly painless, particularly for labs using a GE variant. For clustering software other than GE, note that fsl_sub makes use of a GE feature allowing the submission of a text file containing a list of commands (one per line) to be run in parallel, as illustrated below.
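To illustrate that mechanism, a submission along these lines runs three brain extractions in parallel on the cluster, or in series if no cluster is configured (the option names vary between FSL versions, so check the comments in your copy of ${FSLDIR}/bin/fsl_sub; -t is assumed here to take a file of commands):

# Build a task file with one command per line, then submit it
cat > mycommands.txt <<EOF
bet subj1 subj1_brain
bet subj2 subj2_brain
bet subj3 subj3_brain
EOF
fsl_sub -t mycommands.txt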
Running bedpostX on a GPU or GPU cluster
We now have a CUDA implementation of bedpostX which gives a 100x speedup on a single GPU compared to a single CPU core. Running the CUDA version of bedpostX requires some special settings, as explained below.
- Requirements:
- Linux CentOS 6 or CentOS 6.5
- NVIDIA GPU with compute capability 2.0 or superior
- (FSL version 5.0.6 and CUDA Toolkit 5.0) or (FSL version 5.0.7/5.0.8 and CUDA Toolkit 5.5) or (FSL version 5.0.9 and CUDA Toolkit 6.5)
- SGE for multi-GPU (or optionally SGE for single-GPU)
- Running:
- Without SGE (single GPU): simply run bedpostx_gpu <dataDirectory> [options]
- With SGE:
- The environment variable SGE_ROOT must be set
- In addition, the variable FSLGECUDAQ must be set to the name of your GPU queue (which may have one or several GPUs)
- Run bedpostx using the usual call: bedpostx <dataDirectory> [options]
By default, bedpostx will divide the dataset into 4 parts, submitted as 4 different jobs; if there are 4 GPUs (4 slots in the CUDA queue), the parts can be processed in parallel. To use a different number of parts, use the -NJOBS option. For instance, to use 8 GPUs, run: bedpostx <dataDirectory> -NJOBS 8 [options]
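Putting the SGE pieces together, a typical session might look like the following (the path and queue name are hypothetical; substitute your site's values):

export SGE_ROOT=/opt/sge            # hypothetical SGE installation path
export FSLGECUDAQ=cuda.q            # hypothetical name of your GPU queue
bedpostx <dataDirectory> -NJOBS 8   # split into 8 jobs, one per GPU slot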
- Please reference the following paper when using the GPU version of bedpostx:
[Hernandez 2013] Hernandez M, Guerrero GD, Cecilia JM, Garcia JM, Inuggi A, Jbabdi S, Behrens TEJ, Sotiropoulos SN (2013) Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs. PLoS ONE 8(4):e61892.