[gmx-users] Mpi version of GROMACS - Doubt about its performance

Shang-Te Danny Hsu hsu at panda.chem.uu.nl
Mon Aug 4 16:49:00 CEST 2003


Dear Alexandre,

We noticed the same problem on our SuSE 8.1 Linux cluster with 
mpich-1.2.1 and a 1 Gb/s Ethernet interface. The jobs used less than 
5% of the available CPU when they were distributed automatically. So 
far we have found two temporary workarounds:

1. Resubmit your parallel jobs repeatedly until they run properly 
(usually the third attempt works).

2. Manually define the machines you'd like to distribute your jobs to 
(see the sketch below).
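
For the second workaround, here is a minimal sketch, assuming 
mpich-1.2.1's mpirun and a GROMACS 3.x mdrun built with MPI; the host 
names and file names ("machines", "topol.tpr", etc.) are illustrative:

    # machines: one host per line, listing the nodes to run on
    node01
    node02
    node03
    node04

    # preprocess the run for 4 nodes, then start it on those hosts only
    grompp -np 4 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
    mpirun -np 4 -machinefile machines mdrun -np 4 -s topol.tpr

With -machinefile, mpirun places the processes on the listed hosts 
instead of letting them be assigned automatically.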

We are still testing other possibilities.

Cheers,
Danny

Alexandre Suman de Araujo wrote:
> Hi GMXer
> 
> I have a 4-node Beowulf cluster and I installed the MPI version of 
> GROMACS on it. As a test, I ran the same simulation first on all 4 
> nodes and then on a single node.
> The total simulation time using 4 nodes was only 10% faster than 
> using 1 node. Is this correct? Has anybody seen better performance?
> The Ethernet interface between the nodes is 100 Mb/s... I think this 
> is enough, or is it not?
> Awaiting comments.
> 
> Best Regards.
> 
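
For reference, a quick back-of-the-envelope check of the numbers quoted 
above: "only 10% faster" on 4 nodes means

    speedup    = t(1 node) / t(4 nodes) ~ 1.1   (ideal would be 4.0)
    efficiency = speedup / 4            ~ 0.28

so roughly three quarters of the aggregate CPU time is lost to 
communication and idling, which is consistent with a network bottleneck 
rather than a problem in the simulation itself.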


-- 
Shang-Te Danny HSU
Department of NMR Spectroscopy
Bijvoet Center for Biomolecular Research
Utrecht University
Padualaan 8, 3584 CH Utrecht, the Netherlands
phone: +31-30-2539931 | fax: +31-30-2537623
e-mail: hsu at nmr.chem.uu.nl



