"Open MPI is a project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) with the stated aim of building the best Message Passing Interface (MPI) library available. It is used by many TOP500 supercomputers, including Roadrunner, which, as of 2008, is the world's fastest supercomputer."
Open MPI represents the merger of three well-known MPI implementations:
- FT-MPI from the University of Tennessee
- LA-MPI from Los Alamos National Laboratory
- LAM/MPI from Indiana University
with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.
Message Passing Interface (MPI):
"...a specification for an API that allows many computers to communicate with one another. It is used in computer clusters and supercomputers. MPI was created by William Gropp and Ewing Lusk and others."
"MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation." MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today."
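The point-to-point model described above can be loosely sketched in plain Python: each "rank" runs as its own process and exchanges data only through explicit messages. This is an analogy only, not MPI itself; real MPI programs use the C/Fortran API (`MPI_Send`/`MPI_Recv`) or a binding such as mpi4py, and the pipe-based gather below merely stands in for a collective operation like `MPI_Reduce`.

```python
# Loose pure-Python sketch of the message-passing model (NOT real MPI):
# each "rank" is a separate OS process, and ranks share nothing -- they
# communicate only via explicit messages. multiprocessing Pipes stand in
# for MPI's point-to-point send/receive here.
from multiprocessing import Pipe, Process

def worker(rank, conn):
    # Each rank computes a partial result and "sends" it to rank 0,
    # analogous to MPI_Send on the worker side.
    partial = sum(range(rank * 10, (rank + 1) * 10))
    conn.send((rank, partial))
    conn.close()

def main():
    n_ranks = 4
    pipes = [Pipe() for _ in range(n_ranks)]
    procs = [Process(target=worker, args=(r, child))
             for r, (parent, child) in enumerate(pipes)]
    for p in procs:
        p.start()
    # "Rank 0" receives one message from every rank and sums them -- a
    # hand-rolled stand-in for a collective reduction like MPI_Reduce.
    total = sum(parent.recv()[1] for parent, _ in pipes)
    for p in procs:
        p.join()
    return total  # sum(range(40)) == 780

if __name__ == "__main__":
    print(main())
```

In real MPI the pipes disappear: every process runs the same program under `mpirun`, learns its rank from the communicator, and the runtime routes messages over whatever interconnect is available.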
The Los Alamos Message Passing Interface
("LA-MPI is no longer in active development, but is being maintained for use on production systems at LANL...future development is focused on the Open MPI project, a new component-based, extensible implementation of MPI-2.")
Open MPI: Open Source High Performance Computing
MPI.NET: High-Performance C# Library for Message Passing
Intel® MPI Library 3.2 for Linux or Windows
"Implementing the high performance MPI-2 specification on multiple fabrics, Intel® MPI Library 3.2 focuses on making applications perform better on IA-based clusters. Intel MPI Library enables you to quickly deliver maximum end user performance even if you change or upgrade to new interconnects, without requiring major changes to the software or to the operating environment. Intel also provides a free runtime environment kit for products developed with the Intel MPI library."
On-Demand MPI Cluster with Python and EC2 (part 1 of 3)
MPI Cluster with Python and Amazon EC2 (part 2 of 3)
Data Wrangling Image: Fedora Core 6 MPI Compute Node with Python Libraries
MPI-HMMER is an open source MPI implementation of the HMMER protein sequence analysis suite. The main search algorithms, hmmpfam and hmmsearch, have been ported to MPI in order to provide high throughput HMMER searches on modern computational clusters.
Older MPI References
Message Passing Interface Forum
LAM/MPI Parallel Computing
IBM MPI Programming Guide
IBM MPI Subroutine Reference
IBM Redbooks - RS/6000 SP: Practical MPI Programming
Ohio Supercomputer Center Introduction to Parallel Computing with MPI
Stanford Linear Accelerator Center (SLAC) MPI Tutorial
MPI FORTRAN90 Examples
The Message Passing Interface (MPI) standard
MPICH: A Portable Implementation of MPI
Blaise Barney, Lawrence Livermore National Laboratory: Message Passing Interface (MPI)
SP Parallel Programming Workshop
HARNESS Fault Tolerant MPI
Parallel Programming with MPI by Peter Pacheco
Message Passing Interface (MPI) FAQ
HP Message Passing Interface library (HP-MPI)
National Energy Research Scientific Computing Center, Introduction to MPI
(A DOE Office of Science User Facility at Lawrence Berkeley National Laboratory)
Interoperable MPI, National Institute of Standards and Technology
Internet Parallel Computing Archive > Parallel > Standards > mpi
MPI-FM: Message Passing Interface on Fast Messages
"MPI-FM is a high-performance cluster implementation of the Message Passing Interface (MPI) based on a port of MPICH to Fast Messages. The Message Passing Interface is an industry standard communication interface for message-passing parallel programs. It provides a wealth of capabilities including synchronous and asynchronous messaging, datatypes, and communicators. MPI-FM is a complete implementation of the MPI standard 1.0 based on the Argonne/MSU MPICH code base. However, the MPICH code base was tuned significantly to avoid buffer copies and reduce the critical path length for message reception. The effective software overhead for the MPI send/receive is below 3 microseconds in MPI-FM."
Condor Version 6.6.11 Manual, Condor Team, University of Wisconsin-Madison
2009-04-24 Friday Update:
Heidi Poxon, Technical Lead, Performance Tools, Cray Inc.
CrayPat OpenMP and MPI Metrics