[gmx-users] Mpi version of GROMACS - Doubt about its performance

Jay Mashl mashl at uiuc.edu
Mon Aug 4 18:56:01 CEST 2003

On Mon, 4 Aug 2003, Alexandre Suman de Araujo wrote:
> I have a 4-node Beowulf cluster and I installed the MPI version of
> GROMACS on it. As a test, I ran the same simulation on all 4 nodes
> and then on only one node.
> The total simulation time using 4 nodes was only 10% shorter than
> using 1 node. Is this correct? Or does somebody get better performance?
> The Ethernet interface between the nodes is 100 Mb/s... I think this
> is enough... or not?
> Awaiting comments.


Maybe I am responding in the middle of a conversation, but in case not...

Scalability depends on the number of atoms in your system as well as on the
algorithms/parameters used to compute interactions.  Running a small system on
many nodes degrades performance because the nodes do not have enough work to do
in between communications.
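As a rough illustration (this is a toy model, not GROMACS's actual performance
behavior, and the timing constants are hypothetical), per-step wall time can be
thought of as compute time, which divides across nodes, plus communication
time, which grows with node count.  With plausible numbers, 4 nodes end up only
about 10% faster than 1, much like the result described above:

```python
# Toy speedup model (hypothetical constants, for illustration only):
# per-step time = compute_time / nodes + comm_cost * (nodes - 1)

def speedup(nodes, t_compute=1.0, t_comm_per_node=0.22):
    """Estimated speedup vs. a single node under the toy model."""
    t1 = t_compute  # one node: no inter-node communication
    tn = t_compute / nodes + t_comm_per_node * (nodes - 1)
    return t1 / tn

for n in (1, 2, 4):
    print(f"{n} node(s): speedup {speedup(n):.2f}x")
# With these constants, 4 nodes give only ~1.1x over 1 node.
```

The point is that once the per-node share of work shrinks below the fixed
communication cost of a slow interconnect, adding nodes stops helping (and can
even hurt).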

