Using MPI in Parallel Programming

By Juha Haataja


This article was originally published in the CSC News magazine (Vol. 8, No. 1, March 1996).

What is MPI

MPI (Message-Passing Interface) is a portable standard for programming parallel computers. MPI uses the message-passing paradigm which is well suited for computing on distributed-memory machines. Of course, message-passing can also be used on shared-memory multiprocessors.

Compared to other message-passing systems, MPI is more like a superset than a subset. Because of this, porting programs to MPI from other message-passing systems is usually relatively straightforward. MPI is also easy for a beginner to start using, and the more complex features are not needed in simple and straightforward parallel programs. Furthermore, MPI is designed to make efficient implementations possible, so a program that uses MPI on one system should also run relatively fast on another.

About message passing

Message-passing is not a difficult programming concept to learn, because we humans do message-passing all the time. The reasons for needing message-passing are twofold: we need to exchange data between the parallel tasks, and we may also need to synchronize the tasks. If the parallel tasks are completely independent, no message-passing or control of parallelization is necessary - but in that case it is of course misleading to talk about parallel programming at all.

When doing parallel programming for large applications, the message-passing programming model is often fairly difficult to handle. Programming dozens or even hundreds of parallel tasks is the project planner's nightmare. In human projects it is possible to discuss the status of the different tasks in common meetings, but on a computer all the planning is done beforehand by the programmer, and the debugging of a parallel program is not a favourite job for anyone.

If message-passing is not the easiest way to program, why use it? There are other ways to program parallel computers, for example the data-parallel High-Performance Fortran (HPF) programming language. In data-parallel languages no message-passing programming is usually needed. Unfortunately, data-parallel programming can be used for only a small subset of algorithms. In contrast, the advantage of message-passing is the generality of the model - message-passing can be used to program almost any algorithm you have, and it applies to almost all kinds of computer systems.

Why use MPI

As mentioned in the beginning, MPI is rather like an evolutionary combination of the previous message-passing systems. For the programmer this means a lot of functionality and a relatively high-level programming model.

Because the MPI standard is an open standard, anyone can make an implementation of MPI available. Due to this there are currently several public domain implementations you can choose from, and many supercomputer vendors are offering their own fast versions of MPI.

Because of the wide availability of MPI the portability of a parallel program should not be a problem. Also, due to the standardization of MPI, the programmer doesn't have to care which version of the message-passing system is used, as is the case when using the somewhat older PVM system (Parallel Virtual Machine).

One should note, however, that MPI is not a parallel programming environment, and some aspects of parallel computing are left outside the standard. For example, MPI does not include facilities for dynamic process control, parallel I/O, or starting the parallel tasks. Because these features are currently outside the scope of the standard, each implementation of MPI offers its own parallel programming environment.

Special features of MPI

MPI offers a rich set of functions for doing point-to-point communication. In addition, you can easily divide your tasks into groups for doing parts of the computation. There are also facilities for doing many-to-one and many-to-many communication, such as finding the maximum of a set of values, or voting on what action should be done next. Also, the programmer can arrange the tasks in a virtual topology, so that, for example, nearest-neighbour communication in a square grid is very straightforward to program.
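
As an illustration of the many-to-one operations mentioned above, the following short C program is a minimal sketch (the variable names are just illustrative) which finds the maximum of one value per task using the collective operation MPI_Reduce:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, ntasks;
        double myvalue, maxvalue;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        /* Each task contributes one value; here simply a function of the rank. */
        myvalue = (double) rank * rank;

        /* Many-to-one communication: task 0 receives the maximum of all values. */
        MPI_Reduce(&myvalue, &maxvalue, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Maximum over %d tasks: %g\n", ntasks, maxvalue);

        MPI_Finalize();
        return 0;
    }

Started on, say, four tasks, only task 0 prints the result, since it is the root of the reduction. (The examples in this article are in C; MPI also has Fortran bindings.)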

MPI implementations also contain features which the beginning programmer will certainly appreciate. First of all, MPI guarantees the reliability of message transmissions. Also, messages sent from a particular task to another task arrive in the order they were sent - so the programmer is spared tedious bookkeeping and checking.
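
The ordering guarantee can be seen in a small sketch like the following (again just an illustration): task 0 sends two messages with the same tag to task 1, and they are received in exactly the order they were sent:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, ntasks, first = 1, second = 2, a, b;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        if (ntasks >= 2) {
            if (rank == 0) {
                /* Two messages to the same destination with the same tag... */
                MPI_Send(&first,  1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Send(&second, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                /* ...are received in the order they were sent. */
                MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("Received %d first and %d second\n", a, b);
            }
        }

        MPI_Finalize();
        return 0;
    }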

Perhaps the most advanced aspect of MPI is the concept of "communicators". These make it possible to isolate communication so that only those tasks which should take part in the message-passing can do so. This makes it possible to write applications and subroutine libraries whose communication is invisible to the rest of the world. Therefore, the message-passing done by one part of a software package does not get mixed up with the message-passing done by other parts. This is necessary for implementing, for example, a parallel solver for linear equations which could be called by a program that uses MPI for doing something else in parallel.
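
As a sketch of how a subroutine library can isolate its communication, the following hypothetical routine (the name library_solve is just illustrative) duplicates the communicator given by the caller, so that all message-passing inside the library happens in its own communication context:

    #include <mpi.h>

    void library_solve(MPI_Comm comm)
    {
        MPI_Comm private_comm;

        /* Duplicate the caller's communicator: messages sent inside the
           library use private_comm and therefore cannot be mixed up with
           the caller's own message-passing. */
        MPI_Comm_dup(comm, &private_comm);

        /* ... the solver's internal communication happens on private_comm ... */

        MPI_Comm_free(&private_comm);
    }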

One further aspect of MPI is extensibility. It is possible to extend MPI, for example by adding an application-specific layer of computational primitives on top of MPI. Also, debugging, profiling, and visualization of parallel programs are possible using the capabilities built into the MPI standard. See the accompanying figure for an example of this.
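
A simple application-specific primitive built on top of MPI could look like the following sketch (the function name global_sum is just illustrative):

    #include <mpi.h>

    /* Return the sum of one double over all tasks in the given communicator.
       Every task gets the result, thanks to the collective MPI_Allreduce. */
    double global_sum(double local, MPI_Comm comm)
    {
        double total;

        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, comm);
        return total;
    }

A layer of such primitives lets the rest of the application be written in terms of the problem at hand rather than in terms of raw message-passing calls.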

How to program with MPI

The most important part of parallel programming is usually finding an algorithm which parallelizes, not the selection of a parallelization tool. Often the most efficient serial algorithm is not the best for parallel computation.

Only after finding a scalable parallel algorithm should one think about the coding of the program. If the message-passing programming model seems to suit the application, then MPI is a good choice for implementing the parallelism. The easiest way to start MPI programming is to begin with examples. There are already quite useful collections of MPI example programs available on the Internet. Also, the MPI implementation available to you probably has a set of example and test programs which can be used as templates for your own programs.
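
A typical first example, of the kind found in such collections, is a minimal program where each task simply reports its own rank:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, ntasks;

        /* Initialize MPI, find out who we are, print a line, and finish. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        printf("Hello from task %d of %d\n", rank, ntasks);

        MPI_Finalize();
        return 0;
    }

This skeleton - MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize - serves as a template for almost any MPI program.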

CSC currently offers the MPICH libraries on the Cactus (IBM SP) system, which has 16 processors dedicated to parallel applications. In the summer of 1996 a Cray T3E parallel computer will be installed at CSC, and a high-performance version of MPI will be available for doing serious parallel computing. Meanwhile the Cactus system can be used as a development platform and for testing MPI applications.

Sources for more information

CSC has published in Finnish a guidebook on parallel programming using MPI.

The MPI standard is available on the Internet, as are many tutorials and example programs. Use the search services available on the Web.

You can also start with books describing MPI. Here are some examples:

William Gropp, Ewing Lusk, and Anthony Skjellum, "Using MPI: Portable Parallel Programming with the Message-Passing Interface" (MIT Press, 1994).

Ian Foster, "Designing and Building Parallel Programs" (Addison-Wesley, 1995, URL http://www.mcs.anl.gov/dbpp).

Figure

Message passing visualized using the Upshot program. The visualization shows that the MPI program has a serious bottleneck in communication.

[Upshot figure]