[gmx-users] Mpi version of GROMACS - Doubt about its performance
spoel at xray.bmc.uu.se
Mon Aug 4 19:03:00 CEST 2003
On Mon, 2003-08-04 at 18:54, Jay Mashl wrote:
> On Mon, 4 Aug 2003, Alexandre Suman de Araujo wrote:
> > I have a 4-node Beowulf cluster and I installed the MPI version of
> > GROMACS on it. As a test I performed the same simulation on all 4
> > nodes and afterwards on only one node.
> > The total simulation time using the 4 nodes was only 10% faster than
> > using 1 node. Is this correct? Or does somebody get better performance?
> > The Ethernet interface between the nodes is a 100 Mb/s one... I think this
> > is enough... or not?
> > Awaiting comments.
> Maybe I am responding in the middle of a conversation, but in case not...
> Scalability also depends on the number of atoms in your system as well as the
> algorithms/parameters used to compute interactions. Running a small system on
> a lot of nodes will degrade performance because the nodes do not have enough
> work to do in between communications.
This is correct. PME in particular scales poorly. You could try the GROMACS
benchmarks, in particular the DPPC one.
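As a rough illustration (not part of the original thread), the reported ~10% gain on 4 nodes can be plugged into Amdahl's law to estimate how little of the run effectively parallelizes once communication overhead over 100 Mb/s Ethernet is included. The helper name `parallel_fraction` is hypothetical:

```python
# Hedged sketch: invert Amdahl's law,
#   speedup(n) = 1 / ((1 - p) + p / n),
# to estimate the effective parallel fraction p from an observed speedup.

def parallel_fraction(speedup: float, nodes: int) -> float:
    """Effective parallel fraction p implied by a measured speedup on n nodes."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / nodes)

# "10% faster on 4 nodes" means a speedup of 1.1:
p = parallel_fraction(1.1, 4)
print(f"effective parallel fraction: {p:.2f}")  # roughly 0.12
```

In other words, only about 12% of the wall time behaves as parallel work here; the rest is serial work plus communication, which is consistent with a small system and slow interconnect rather than a misconfigured installation.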
Dr. David van der Spoel, Dept. of Cell and Molecular Biology
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: 46 18 471 4205 fax: 46 18 511 755
spoel at xray.bmc.uu.se spoel at gromacs.org http://xray.bmc.uu.se/~spoel