[gmx-users] grompp -np option
Berk Hess
gmx3 at hotmail.com
Fri Jun 10 18:17:11 CEST 2005
>From: "Nathan Moore" <nmoore at physics.umn.edu>
>Reply-To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
>
>Can you expound on your ring structure? Does this mean that if a message
>is sent from rank=1 to rank=3 the message must go through rank 2 (i.e.
>mpi_send/recv(1->2) then mpi_send/recv(2->3))? I was under the impression
>that efficient MPI routing is generally delegated to the vendor MPI
>implementation.
The ring communication is described in the manual.
The ring structure stems from the original special-purpose Gromacs machine
(which is what the "mac" in Gromacs stands for).
Some years ago I tested MPI all-to-all and sum calls on a Cray/SGI
shared-memory machine. Surprisingly, it turned out that these were
significantly slower than the Gromacs ring structure, so we decided to keep
the ring.
In the current CVS code there are some all-to-all calls in the PME code.
I haven't checked recently what the performance difference is.
But Gromacs is now mostly run on PC clusters, and there I expect the ring
communication to still be pretty efficient, especially for the small number
of nodes usually used for MD simulations.
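To make the ring idea concrete, here is a minimal sketch of a ring-style
global sum (this is not the actual Gromacs code; the function and variable
names are just for illustration). Each node exchanges data only with its two
neighbours, so a contribution from rank 1 does indeed pass through rank 2
before rank 3 sees it, and after nnodes-1 hops every node holds the full sum:

#include <mpi.h>
#include <stdio.h>

static void ring_sum(double *buf, double *work, int n, MPI_Comm comm)
{
    int rank, nnodes, left, right, i, hop;
    MPI_Status status;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nnodes);
    left  = (rank - 1 + nnodes) % nnodes;
    right = (rank + 1) % nnodes;

    for (i = 0; i < n; i++)
        work[i] = buf[i];          /* circulate our own contribution */

    for (hop = 0; hop < nnodes - 1; hop++) {
        /* pass the partial data one step around the ring:
           send to the right neighbour, receive from the left one */
        MPI_Sendrecv_replace(work, n, MPI_DOUBLE,
                             right, 0, left, 0, comm, &status);
        for (i = 0; i < n; i++)
            buf[i] += work[i];     /* accumulate what just arrived */
    }
}

int main(int argc, char **argv)
{
    double x = 1.0, tmp;

    MPI_Init(&argc, &argv);
    ring_sum(&x, &tmp, 1, MPI_COMM_WORLD);
    printf("global sum = %g\n", x); /* equals the number of nodes */
    MPI_Finalize();
    return 0;
}

For comparison, the same reduction is a single MPI_Allreduce call; which of
the two is faster depends on the machine and interconnect, as noted above.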
Berk.