[gmx-users] grompp -np option
Nathan Moore
nmoore at physics.umn.edu
Fri Jun 10 18:31:44 CEST 2005
I certainly understand your motivation in optimizing the code for
cheap-to-build clusters. I'm presently assessing the code's performance
on Blue Gene for IBM. On this architecture, an application must scale
decently to at least 512 processors (the minimum number one can request in
a production system).
Thanks for all your help so far!
Nathan
>
>>From: "Nathan Moore" <nmoore at physics.umn.edu>
>>Reply-To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>>To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
>>
>>Can you expand on your ring structure? Does this mean that if a message
>>is sent from rank=1 to rank=3, the message must go through rank 2 (i.e.
>>mpi_send/recv(1->2), then mpi_send/recv(2->3))? I was under the
>>impression that efficient MPI routing is generally delegated to the
>>vendor's MPI implementation.
>
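
(For illustration, here is a minimal MPI sketch of the pattern the
question describes, assuming a pure ring in which each rank only talks
to its immediate neighbours, so data from rank 1 reaches rank 3 by being
relayed through rank 2. This is a hypothetical example, not GROMACS
source code.)

    /* Sketch: hop-by-hop relay in a ring, NOT the GROMACS routine.
     * Run with at least 4 ranks, e.g. mpirun -np 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nranks, payload = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        if (rank == 1)        /* origin: send to the next rank in the ring */
        {
            MPI_Send(&payload, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
        }
        else if (rank == 2)   /* relay: receive from rank 1, forward to rank 3 */
        {
            MPI_Recv(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&payload, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
        }
        else if (rank == 3)   /* destination */
        {
            MPI_Recv(&payload, 1, MPI_INT, 2, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 3 got %d via rank 2\n", payload);
        }

        MPI_Finalize();
        return 0;
    }
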
> The ring communication is described in the manual.
> The ring structure stems from the original special-purpose Gromacs
> machine (which is what the "mac" in Gromacs stands for).
> Some years ago I tested MPI all-to-all and sum calls on a Cray/SGI
> shared-memory machine. Surprisingly, it turned out that these were
> significantly slower than the Gromacs ring structure, so we decided to
> keep the ring.
> In the current CVS code there are some all-to-all calls in the PME code.
> I haven't checked recently what the performance difference is.
> But Gromacs is now mostly run on PC clusters, and there I expect the
> ring communication to still be pretty efficient, especially for the
> small number of nodes usually used in MD simulations.
>
> Berk.
>
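
(As an aside, a minimal sketch of the two approaches compared above: a
global sum circulated hop by hop around the ring versus the single
MPI_Allreduce collective. This is an illustration under those
assumptions, not the actual GROMACS ring code.)

    /* Sketch: naive ring global sum vs. MPI_Allreduce.
     * Each rank forwards what it received to its right-hand neighbour;
     * after nranks-1 hops every contribution has visited every rank. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nranks;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        int left  = (rank - 1 + nranks) % nranks;  /* ring neighbours */
        int right = (rank + 1) % nranks;

        double mine = rank + 1.0;   /* this rank's contribution */
        double sum  = mine;
        double sendbuf = mine, recvbuf;

        /* Ring global sum: circulate values for nranks-1 steps. */
        for (int step = 0; step < nranks - 1; step++)
        {
            MPI_Sendrecv(&sendbuf, 1, MPI_DOUBLE, right, 0,
                         &recvbuf, 1, MPI_DOUBLE, left, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += recvbuf;         /* accumulate the neighbour's value */
            sendbuf = recvbuf;      /* and forward it on around the ring */
        }

        /* The collective benchmarked against the ring: */
        double allred;
        MPI_Allreduce(&mine, &allred, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        printf("rank %d: ring sum = %g, MPI_Allreduce = %g\n",
               rank, sum, allred);
        MPI_Finalize();
        return 0;
    }

Which version is faster is exactly the machine-dependent question
described above: the ring needs nranks-1 steps but only ever talks to
neighbours, while the collective's performance depends on the vendor's
implementation.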