
2009-03-04 Wednesday - Open MPI / Message Passing Interface (MPI)

This evening, while doing some research on SOA and Cloud Computing technologies, I came across this article from the Center for High Performance Computing at the University of Utah: How to compile and run a trivial MPI program through the CHPC Batch system

Open MPI
"Open MPI is a project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) with the stated aim of building the best Message Passing Interface (MPI) library available. It is used by many TOP500 supercomputers including Roadrunner, which is as of 2008 the world's fastest supercomputer."

Open MPI represents the merger between three well-known MPI implementations:

- FT-MPI from the University of Tennessee
- LA-MPI from Los Alamos National Laboratory
- LAM/MPI from Indiana University

with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.


Message Passing Interface (MPI):
"...a specification for an API that allows many computers to communicate with one another. It is used in computer clusters and supercomputers. MPI was created by William Gropp and Ewing Lusk and others."
(...)
"MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation."[1] MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today."

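(Not from the quoted sources - just a minimal sketch for context: the classic point-to-point pattern in C, assuming a working MPI installation such as Open MPI or MPICH. Compile with something like "mpicc hello_mpi.c -o hello_mpi" and launch with "mpirun -np 2 ./hello_mpi"; the file name is arbitrary.)

/* Minimal sketch: rank 0 sends one integer to rank 1 over MPI_COMM_WORLD. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime    */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (rank) */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);      /* to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                             /* from rank 0 */
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}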


The Los Alamos Message Passing Interface
("LA-MPI is no longer in active development, but is being maintained for use on production systems at LANL...future development is focused on the Open MPI project, a new component-based, extensible implementation of MPI-2.")


Open MPI: Open Source High Performance Computing

MPI.NET: High-Performance C# Library for Message Passing

Intel® MPI Library 3.2 for Linux or Windows
"Implementing the high performance MPI-2 specification on multiple fabrics, Intel® MPI Library 3.2 focuses on making applications perform better on IA based clusters. Intel MPI Library enables you to quickly deliver maximum end user performance even if you change or upgrade to new interconnects, without requiring major changes to the software or to the operating environment. Intel also provides a free runtime environment kit for products developed with the Intel MPI library."


On-Demand MPI Cluster with Python and EC2 (part 1 of 3)

MPI Cluster with Python and Amazon EC2 (part 2 of 3)

Data Wrangling Image: Fedora Core 6 MPI Compute Node with Python Libraries

MPI-HMMER is an open source MPI implementation of the HMMER protein sequence analysis suite. The main search algorithms, hmmpfam and hmmsearch, have been ported to MPI in order to provide high throughput HMMER searches on modern computational clusters.

Microsoft MPI


Older MPI References

Message Passing Interface Forum
MPI Documents

LAM/MPI Parallel Computing

IBM MPI Programming Guide
IBM MPI Subroutine Reference
IBM Redbooks - RS/6000 SP: Practical MPI Programming

MPI.NET Software

MPI Tutorials

Ohio Supercomputer Center Introduction to Parallel Computing with MPI

Stanford Linear Accelerator Center (SLAC) MPI Tutorial

MPI FORTRAN90 Examples

The Message Passing Interface (MPI) standard

MPICH-A Portable Implementation of MPI

Blaise Barney, Lawrence Livermore National Laboratory: Message Passing Interface (MPI)

SP Parallel Programming Workshop

HARNESS Fault Tolerant MPI

Parallel Programming with MPI by Peter Pacheco

Message Passing Interface (MPI) FAQ

HP Message Passing Interface library (HP-MPI)

National Energy Research Scientific Computing Center, Introduction to MPI
(A DOE Office of Science User Facility at Lawrence Berkeley National Laboratory)

Interoperable MPI, National Institute of Standards and Technology

Internet Parallel Computing Archive > Parallel > Standards > mpi

MPI-FM: Message Passing Interface on Fast Messages
"MPI-FM is a high-performance cluster implementation of the Message Passing Interface (MPI) based on a port of MPICH to Fast Messages. The Message Passing Interface is an industry standard communication interface for message-passing parallel programs. It provides a wealth of capabilities including synchronous and asynchronous messaging, datatypes, and communicators. MPI-FM is a complete implementation of the MPI standard 1.0 based on the Argonne/MSU MPICH code base. However, the MPICH code base was tuned significantly to avoid buffer copies and reduce the critical path length for message reception. The effective software overhead for the MPI send/receive is below 3 microseconds in MPI-FM."


MPI-CHECK (FORTRAN)

P2P-MPI

Condor Version 6.6.11 Manual, Condor Team, University of Wisconsin-Madison




2009-04-24 Friday Update:

Heidi Poxon, Technical Lead, Performance Tools, Cray Inc.
Craypat OpenMP and MPI Metrics

3 comments:

Dinesh Agarwal said...

Hi,
I am working with MPI, and recently our department bought a cluster with multiple multicore chips on it. I want to know which processor is being utilized by the processes I created. MPI_Get_processor_name returns the same name for all processors. I am not sure if there is any such API. Please let me know if you have an idea of how this can be accomplished.

Kelvin D. Meeks said...

Dinesh:

I believe you want to examine the Rank attribute:

Two examples of the syntax for retrieving the value:

MPI_Comm_rank(MPI_COMM_WORLD, &rank);

(see slide #18: http://www.cs.earlham.edu/~lemanal/slides/mpi-slides.pdf)

__or__

int rank = MPI::COMM_WORLD.Get_rank();

(see slide #6: http://www-meg.phys.cmu.edu/~bellis/Notes/IntroToMPItalk.pdf)
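
Putting those together, here is a rough, untested sketch in C that prints each process's rank alongside the node name reported by MPI_Get_processor_name:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* process id within MPI_COMM_WORLD  */
    MPI_Get_processor_name(name, &namelen);   /* node name, typically the hostname */

    printf("rank %d is running on %s\n", rank, name);

    MPI_Finalize();
    return 0;
}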

Do you have a tool like Marmot available?

MPI Application Development Using the Analysis Tool MARMOT
http://www.springerlink.com/content/5m5kqxa8ux7m0tk3/

Dinesh Agarwal said...

Kelvin,
Thanks for the prompt reply. AFAIK, rank returns the id of the process, not the processor. What I am looking for is a way to monitor the task allocation, i.e. which process got assigned to which processor. Even if I have an analysis tool, it must use some MPI call (I believe) to find out which processor is executing the process with id (rank), say x. Moreover, I am trying to find out how MPI assigns ids to different cores of a processor. I hope this makes the earlier question clearer rather than more confusing.

Copyright

© 2001-2021 International Technology Ventures, Inc., All Rights Reserved.