Slurm and MPI example


Overview¶

RCC supports these MPI implementations:

  • IntelMPI
  • MVAPICH2
  • OpenMPI

Each MPI implementation usually has a module available for use with GCC, the Intel Compiler Suite, and PGI. For example, at the time of this writing these MPI modules were available:

openmpi/1.6(default)
openmpi/1.6+intel-12.1
openmpi/1.6+pgi-2012
mvapich2/1.8(default)
mvapich2/1.8+intel-12.1
mvapich2/1.8+pgi-2012
mvapich2/1.8-gpudirect
mvapich2/1.8-gpudirect+intel-12.1
intelmpi/4.0
intelmpi/4.0+intel-12.1(default)
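
To check what is currently installed, the module command can be queried directly. A quick sketch; the output will vary as modules are added and retired:

module avail openmpi
module avail mvapich2
module avail intelmpi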

MPI Implementation Notes¶

The different MPI implementations have different options and features. Any notable differences are noted here.

IntelMPI¶

IntelMPI uses an environment variable to select the network communication fabric:

I_MPI_FABRICS

During job launch the Slurm TaskProlog detects the network hardware and sets this variable appropriately. It will typically be set to shm:ofa, which makes IntelMPI use shared memory communication first and ibverbs after that. If a job is run on a node without Infiniband, it will be set to shm, which uses shared memory only and limits IntelMPI to a single-node job. This is usually what is wanted on nodes without a high-speed interconnect. If desired, this variable can be overridden in the submission script.
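
For example, to force the fabric selection from within a submission script, the variable can simply be exported before launching the program. A minimal sketch; the shm:ofa value is the one described above, and hello-mpi is the example program built later on this page:

# override the fabric chosen by the Slurm TaskProlog
export I_MPI_FABRICS=shm:ofa
mpirun ./hello-mpi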

MVAPICH2¶

MVAPICH2 is compiled with the OFA-IB-CH3 interface. There is no support for running programs compiled with MVAPICH2 on loosely coupled nodes.

GPUDirect builds of MVAPICH2 with CUDA enabled are available for use on the GPU nodes. These builds are otherwise identical to the standard MVAPICH2 build.
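
Selecting a GPUDirect build is just a matter of loading the corresponding module; a sketch of the module and compile step only, since the GPU resources themselves are requested through the usual Slurm options for GPU nodes (not shown here):

module load mvapich2/1.8-gpudirect
mpicc hello-mpi.c -o hello-mpi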

OpenMPI¶

Nothing at this time.

Example¶

Let’s look at an example MPI hello world program and walk through the steps needed to compile it and submit it to the queue. The example program, hello-mpi.c:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[], char *envp[]) {
  int numprocs, rank, namelen;
  char processor_name[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(processor_name, &namelen);

  printf("Process %d on %s out of %d\n", rank, processor_name, numprocs);

  MPI_Finalize();
}

Place hello-mpi.c in your home directory. Compile this program interactively by entering the following commands into the terminal:

module load openmpi
mpicc hello-mpi.c -o hello-mpi

In this case we are using the default version of the openmpi module, which is built with the GCC compiler. It should be possible to use any of the available MPI/compiler combinations for this example.
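
For instance, the same program could be built against one of the Intel-compiler builds listed above instead. A sketch, assuming that module is still installed:

module load openmpi/1.6+intel-12.1
mpicc hello-mpi.c -o hello-mpi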

hello-mpi.sbatch is a submission script that can be used to submit a job to the queue to run this program.

#!/bin/bash

# set the job name to hello-mpi
#SBATCH --job-name=hello-mpi

# send output to hello-mpi.out
#SBATCH --output=hello-mpi.out

# this job requests 2 nodes
#SBATCH --nodes=2

# this job requests exclusive access to the nodes it is given
# this means it will be the only job running on the nodes
#SBATCH --exclusive

# --constraint=ib must be given to guarantee the job is allocated
# nodes with Infiniband
#SBATCH --constraint=ib

# load the openmpi module
module load openmpi

# Run the program with mpirun. Notice -n is not required. mpirun will
# automatically figure out how many processes to run from the Slurm options
mpirun ./hello-mpi

The inline comments describe what each line does, but it is important to point out three things that almost all MPI jobs have in common:

  • --constraint=ib is given to guarantee a node with Infiniband is allocated
  • --exclusive is given to guarantee this job will be the only job on the node
  • mpirun does not need to be given -n. All supported MPI environments automatically determine the proper layout based on the Slurm options (a concrete sketch of this follows the list)
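
To make the last point concrete: for a job submitted with an explicit task count (for example sbatch --ntasks=32 ...), the mpirun line in the script above is equivalent to spelling the count out by hand. This is only an illustrative sketch; SLURM_NTASKS is the environment variable Slurm sets when --ntasks is given:

# explicit equivalent of the mpirun line above; normally unnecessary
mpirun -n $SLURM_NTASKS ./hello-mpi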

You can submit this job with this command:

sbatch hello-mpi.sbatch

Here is example output of this program:

Process 4 on midway123 out of 32
Process 0 on midway123 out of 32
Process 1 on midway123 out of 32
Process 2 on midway123 out of 32
Process 5 on midway123 out of 32
Process 15 on midway123 out of 32
Process 12 on midway123 out of 32
Process 7 on midway123 out of 32
Process 9 on midway123 out of 32
Process 14 on midway123 out of 32
Process 8 on midway123 out of 32
Process 24 on midway124 out of 32
Process 10 on midway123 out of 32
Process 11 on midway123 out of 32
Process 3 on midway123 out of 32
Process 6 on midway123 out of 32
Process 13 on midway123 out of 32
Process 17 on midway124 out of 32
Process 20 on midway124 out of 32
Process 19 on midway124 out of 32
Process 25 on midway124 out of 32
Process 27 on midway124 out of 32
Process 26 on midway124 out of 32
Process 29 on midway124 out of 32
Process 28 on midway124 out of 32
Process 31 on midway124 out of 32
Process 30 on midway124 out of 32
Process 18 on midway124 out of 32
Process 22 on midway124 out of 32
Process 21 on midway124 out of 32
Process 23 on midway124 out of 32
Process 16 on midway124 out of 32

It is possible to affect the number of tasks run per node with the --ntasks-per-node option. Submitting the job like this:

sbatch --ntasks-per-node=1 hello-mpi.sbatch

Results in output like this:

Process 0 on midway123 out of 2
Process 1 on midway124 out of 2
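
The same option can also be set inside the submission script rather than on the sbatch command line. A minimal sketch of the extra line that would be added to hello-mpi.sbatch:

# run only one MPI task on each of the allocated nodes
#SBATCH --ntasks-per-node=1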

Advanced Usage¶

Both OpenMPI and IntelMPI can launch MPI programs directly with the Slurm command srun. This is not necessary for most jobs, but it may allow job launch options that would not otherwise be possible. For example, from a login node it is possible to launch the hello-mpi program built above with OpenMPI directly on a compute node with this command:

srun --constraint=ib -n16 --exclusive hello-mpi

For IntelMPI, it is necessary to set an environment variable for this to work:

export I_MPI_PMI_LIBRARY=/software/slurm-current-$DISTARCH/lib/libpmi.so
srun --constraint=ib -n16 --exclusive hello-mpi