Message Passing Interface - Example Program

Here is a "Hello World" program in MPI written in C. In this example, we send a "hello" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

/* "Hello World" MPI Test Program */ #include #include #include #define BUFSIZE 128 #define TAG 0 int main(int argc, char *argv) { char idstr; char buff; int numprocs; int myid; int i; MPI_Status stat; /* MPI programs start with MPI_Init; all 'N' processes exist thereafter */ MPI_Init(&argc,&argv); /* find out how big the SPMD world is */ MPI_Comm_size(MPI_COMM_WORLD,&numprocs); /* and this processes' rank is */ MPI_Comm_rank(MPI_COMM_WORLD,&myid); /* At this point, all programs are running equivalently, the rank distinguishes the roles of the programs in the SPMD model, with rank 0 often used specially... */ if(myid == 0) { printf("%d: We have %d processors\n", myid, numprocs); for(i=1;iWhen run with two processors this gives the following output.

0: We have 2 processors
0: Hello 1! Processor 1 reporting for duty

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication among the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and can communicate with the others either through point-to-point (send/receive) communication or through collective communication within the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is managed more carefully than in this example. MPI does not guarantee how POSIX I/O would behave on a given system, but it commonly does work, at least from rank 0.
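As a concrete sketch of how such a job is typically built and launched (the source file name hello_mpi.c and the compiler wrapper mpicc are illustrative assumptions here; wrapper and launcher names vary by implementation), the example above might be run on two processes with:

mpicc hello_mpi.c -o hello_mpi
mpiexec -n 2 ./hello_mpi

The -n 2 option asks the launcher to start two copies of the executable, which become ranks 0 and 1 in MPI_COMM_WORLD.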

MPI uses the notion of a process rather than a processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to one physical processor, or to N processors, where N is the total number of processors available, or to something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behavior to the size of the world N, so it scales to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.
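To illustrate this runtime scaling, here is a minimal sketch, not part of the original example (the problem size N_ITEMS is a hypothetical placeholder), in which the same binary deals work items out round-robin across however many ranks the job was started with, then combines the per-rank results with a collective operation:

#include <mpi.h>
#include <stdio.h>

#define N_ITEMS 1000  /* hypothetical problem size, chosen for illustration */

int main(int argc, char *argv[])
{
    int myid, numprocs, i;
    int local_count = 0, total_count = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* each rank takes every numprocs-th item, so the same binary
       adapts to however many processes were launched */
    for (i = myid; i < N_ITEMS; i += numprocs)
        local_count++;

    /* a collective operation sums the per-rank counts at rank 0 */
    MPI_Reduce(&local_count, &total_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("%d: %d processes handled %d items in total\n",
               myid, numprocs, total_count);

    MPI_Finalize();
    return 0;
}

The same executable produces a correct total whether it is launched on one process or many, which is the scaling property described above.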
