The mpirun(1) command is the primary job launcher for the SGI implementation of MPI. The mpirun command must be used whenever a user wishes to run an MPI application on an IRIX or a Linux system. You can run an application on the local host only (the host from which you issued mpirun) or distribute it to run on any number of hosts that you specify. Note that several MPI implementations available today use a job launcher called mpirun and, because this command is not part of the MPI standard, each implementation's mpirun command differs in both syntax and functionality.
The format of the mpirun command is as follows:
mpirun [global_options] entry [: entry ...]
The global_options operand applies to all MPI executable files on all specified hosts. The following global options are supported:
-a[rray] array_name
    (IRIX only) Specifies the array to use when launching an MPI application. By default, Array Services uses the default array specified in the Array Services configuration file, arrayd.conf.

-cpr
    (IRIX systems only) Allows users to checkpoint or restart MPI jobs that consist of a single executable file running on a single system. The absence of any host names in the mpirun command indicates that the job is running on a single system. For example, the following command is valid:

    The following commands are not valid:

    The first is not valid because it consists of more than one executable file (a.out and b.out). The second is not valid because, even if submitted from hosta, it specifies a host name. For interactive users, the preferred method of checkpointing the job is by ASH (array session handle); this ensures that all of the user's processes specified in the mpirun command, plus the daemons associated with the job, are checkpointed. You can use the array(1) command to find the ASH of a job. Interactive users should also note that stdin, stdout, and stderr must not be connected to the terminal when this option is used. Use of this option requires Array Services 3.1 or later. The default behavior allows jobs to be checkpointed if the rules above are followed, but using the -cpr option is recommended because it produces specific error messages instead of silently disabling checkpointing.

-d[ir] path_name
    Specifies the working directory for all hosts. In addition to normal path names, the following special values are recognized:

-f[ile] file_name
    Specifies a text file that contains mpirun arguments.

-h[elp]
    Displays a list of options supported by the mpirun command.

-p[refix] prefix_string
    Specifies a string to prepend to each line of stdout and stderr output from each MPI process. To delimit lines of text that come from different hosts, output to stdout must be terminated with a newline character. If a process's stdout or stderr stream does not end with a newline character, no prefix is attached to that process's output from the final newline to the end of the stream. If the MPI_UNBUFFERED_STDIO environment variable is set, the prefix string is ignored.

    Some strings have special meaning and are translated as follows:

    For examples of the use of these strings, first consider the following code fragment:

    Depending on how this code is run, the results of running the mpirun command will be similar to those in the following examples:

-up u_size
    Specifies the value of the MPI_UNIVERSE_SIZE attribute, which is used to support MPI_Comm_spawn and MPI_Comm_spawn_multiple. This option must be set if either of these functions is to be used by the application launched by mpirun. Setting it implies that the MPI job is being run in spawn capable mode.

-v[erbose]
    Displays comments on what mpirun is doing when launching the MPI application.
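Although mpirun applies the prefix itself, the line-oriented behavior described for -p[refix] can be simulated with ordinary shell tools. The following sketch uses sed, not mpirun, and the prefix string rank0> is purely illustrative:

```shell
# Simulate mpirun's -p behavior for a single process: prepend a
# fixed prefix to every complete (newline-terminated) line.
# "rank0> " stands in for whatever string you would pass to -p.
printf 'hello\nworld\n' | sed 's/^/rank0> /'
```

As documented above, real mpirun attaches no prefix to a trailing fragment of output that is not terminated by a newline.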
The entry operand describes a host on which to run a program, and the local options for that host. You can list any number of entries on the mpirun command line.
In the common case of single program, multiple data (SPMD), in which the same program runs with identical arguments on each host, you usually need to specify only one entry.
Each entry has the following components:
One or more host names (not needed if you run on the local host)
Number of processes to start on each host
Name of an executable program
Arguments to the executable program (optional)
An entry has the following format:
host_list local_options program program_arguments
The host_list operand is either a single host (machine name) or a comma-separated list of hosts on which to run an MPI program.
The local_options operand contains information that applies to a specific host list. The following local options are supported:
-f[ile] file_name
    Specifies a text file that contains mpirun arguments (same as the global -f option). For more details, see “Using a File for mpirun Arguments”.

-np num_proc
    Specifies the number of processes to start.

-nt num_tasks
    Behaves the same as -np.
The program program_arguments operand specifies the name of the program that you are running and its accompanying options.
Because the full specification of a complex job can be lengthy, you can enter mpirun arguments in a file and use the -f option to specify the file on the mpirun command line, as in the following example:
mpirun -f my_arguments
The arguments file is a text file that contains argument segments. White space is ignored in the arguments file, so you can include spaces and newline characters for readability. An arguments file can also contain additional -f options.
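As a sketch, an arguments file for a hypothetical two-host job might look like the following (the host names, process counts, and executables are illustrative, not taken from a real configuration):

```shell
# Create a hypothetical mpirun arguments file. Whitespace and
# newlines are ignored when the file is read, so entries can be
# split across lines for readability.
cat > my_arguments <<'EOF'
-d /tmp/mydir
host_a -np 6 a.out :
host_b -np 4 b.out
EOF
# The job would then be launched with: mpirun -f my_arguments
cat my_arguments
```

Because an arguments file may itself contain -f options, a large job can be composed from several smaller fragment files.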
For testing and debugging, it is often useful to run an MPI program only on the local host without distributing it to other systems. To run the application locally, enter mpirun with the -np argument. Your entry must include the number of processes to run and the name of the MPI executable file.
The following command starts three instances of the application mtest, to which is passed an arguments list (arguments are optional).
mpirun -np 3 mtest 1000 "arg2"
You are not required to use a different host in each entry that you specify on the mpirun(1) command. You can launch a job that has two executable files on the same host. In the following example, both executable files use shared memory:
mpirun host_a -np 6 a.out : host_a -np 4 b.out
You can use mpirun(1) to launch a program that consists of any number of executable files and processes and distribute it to any number of hosts. A host is usually a single machine, or, for IRIX systems, can be any accessible computer running Array Services software. For available nodes on systems running Array Services software, see the /usr/lib/array/arrayd.conf file. Array Services is not supported on Linux systems currently, so an alternate launching mechanism is used.
You can list multiple entries on the mpirun command line. Each entry contains an MPI executable file and a combination of hosts and process counts for running it. This gives you the ability to start different executable files on the same or different hosts as part of the same MPI application.
The following examples show various ways to launch an application that consists of multiple MPI executable files on multiple hosts.
The following example runs ten instances of the a.out file on host_a:
mpirun host_a -np 10 a.out
When specifying multiple hosts, you can omit the -np or -nt option and list the number of processes directly. The following example launches ten instances of fred, which takes two input arguments, on three hosts.
mpirun host_a, host_b, host_c 10 fred arg1 arg2
The following example launches an MPI application on different hosts with different numbers of processes and executable files, using an array called test:
mpirun -array test host_a 6 a.out : host_b 26 b.out
The following example launches an MPI application on different hosts out of the same directory on both hosts:
mpirun -d /tmp/mydir host_a 6 a.out : host_b 26 b.out
To use the MPI-2 process creation functions MPI_Comm_spawn or MPI_Comm_spawn_multiple, you must set the universe size with the -up option on the mpirun command line.
For example, the following command starts three instances of the MPI application mtest in a universe of size 10:
mpirun -up 10 -np 3 mtest
Up to 7 more MPI processes can be started by mtest via one of the spawn commands.
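The limit of seven follows directly from the universe size minus the processes already started. A quick shell sketch of the arithmetic (the variable names are illustrative, not mpirun settings):

```shell
# MPI_UNIVERSE_SIZE as set by -up, and the initial count from -np.
universe_size=10
initial_procs=3
# Slots remaining for MPI_Comm_spawn / MPI_Comm_spawn_multiple:
echo $((universe_size - initial_procs))
```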
Note: This implementation does not support spawn capable mode for MPI jobs launched via certain batch schedulers, nor does it support a spawn capability for MPI jobs spanning multiple hosts.