This section describes instructions and simple scripts we've developed to launch batch scheduled jobs using Grid Engine on Rocks clusters.
Jobs are submitted to Grid Engine via scripts. Here is an example of a Grid Engine script, sge-qsub-test.sh, which we use to test Grid Engine. It asks Grid Engine to launch the MPI job on two processors (line 5: #$ -pe mpi 2). The script then sets up a temporary ssh key that mpirun uses to instantiate the program (in this case, xhpl).
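A minimal sketch of such a script, assuming common SGE directives (the actual sge-qsub-test.sh shipped with Rocks may differ in detail, though the -pe request is placed on line 5 to match the text):

```shell
#!/bin/sh
# Hypothetical sketch in the spirit of sge-qsub-test.sh; the real
# script may differ.
#$ -cwd -S /bin/sh
#$ -pe mpi 2
# The mpi parallel environment's start script turns $PE_HOSTFILE into
# a machines file; mpirun then launches the program (here, xhpl).
mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./xhpl
```

Here $NSLOTS and $TMPDIR are set by Grid Engine for the job; this fragment only runs under an SGE-scheduled environment.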
You can submit the job to Grid Engine by executing:
qsub sge-qsub-test.sh
After the job is launched, you can query the status of the queue by running:
qstat -f
Grid Engine writes the output of a job into four files. The two files that are most relevant are:
$HOME/sge-qsub-test.sh.o<job id>
$HOME/sge-qsub-test.sh.e<job id>
The other two files pertain to the status of the parallel environment, and they are named:
$HOME/sge-qsub-test.sh.po<job id>
$HOME/sge-qsub-test.sh.pe<job id>
SGE, the default batch system on Rocks clusters, will allocate you a set of nodes on which to run your parallel job. It will not, however, launch your processes for you. Instead, SGE sets a variable called $PE_HOSTFILE that names a file listing the allocated nodes. In the mpi parallel environment, a special start script parses this file and starts the mpirun launcher. If you need to start a non-MPI job via SGE, however, cluster-fork can help. (See Section 2.1.2 for details on cluster-fork.)
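The $PE_HOSTFILE format is one line per allocated host: the hostname, the number of slots granted on that host, the queue instance, and a processor range. As a sketch (the sample hostnames below are hypothetical; inside a real job you would read "$PE_HOSTFILE" instead), the expansion that a start script performs can be approximated with awk:

```shell
#!/bin/sh
# Sketch: expand a PE_HOSTFILE into a machines list for mpirun by
# repeating each hostname once per granted slot (field 2).
awk '{ for (i = 0; i < $2; i++) print $1 }' <<'EOF'
compute-0-0.local 2 all.q@compute-0-0.local UNDEFINED
compute-0-1.local 2 all.q@compute-0-1.local UNDEFINED
EOF
```

Each hostname appears once per slot, which is the form a machines file for mpirun expects.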
Cluster-fork can interpret the PE_HOSTFILE given by SGE. The --pe-hostfile option is used for this purpose. For example, to start the 'hostname' command on all nodes allocated by SGE:
/opt/rocks/bin/cluster-fork --bg --pe-hostfile $PE_HOSTFILE hostname