[gmx-users] Mpi version of GROMACS - Doubt about its performance

Alexandre Suman de Araujo asaraujo at if.sc.usp.br
Mon Aug 4 17:26:02 CEST 2003

Shang-Te Danny Hsu wrote:

> Dear Alexandre,
> We also noticed the same problem with our SuSE8.1 Linux cluster with 
> mpich-1.2.1 and 1Gb/s ethernet interface. It used less than 5% of 
> the available CPU when the parallel jobs were distributed 
> automatically. So far we have found two temporary solutions:
> 1. Resubmit your parallel jobs repeatedly until they run properly 
> (normally they are okay by the third attempt)
> 2. Manually define the machines you'd like to distribute your jobs to.
> We are still testing other possibilities
> Cheers,
> Danny
> Alexandre Suman de Araujo wrote:
>> Hi GMXer
>> I have a 4-node Beowulf cluster and I installed the MPI version of 
>> GROMACS on it. As a test, I performed the same simulation on all 
>> 4 nodes and then on only one node.
>> The total simulation time using the 4 nodes was only 10% faster 
>> than using 1 node. Is this correct? Or does somebody get better 
>> performance?
>> The ethernet interface between the nodes is a 100Mb/s one... I think 
>> this is enough... or not?
>> Awaiting comments.
>> Best Regards.
I'm using Red Hat 9.0 with LAM 7.0. What do you mean by resubmitting 
jobs? Stop mdrun (with a ctrl+c) and start it again? Or something else?
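For Danny's second workaround (manually defining the machines to distribute jobs to), with LAM this is normally done by booting the LAM runtime from an explicit boot schema listing the hosts, then launching mdrun through mpirun. A minimal sketch follows; the file name `lamhosts`, the host names `node1`..`node4`, and the MPI-enabled binary name `mdrun_mpi` are assumptions, not from the thread, and GROMACS 3.x expects the process count via `-np` on both grompp and mdrun:

```shell
# Hypothetical boot schema naming the four cluster nodes explicitly
# (host names are placeholders for your cluster's actual hosts)
cat > lamhosts <<'EOF'
node1
node2
node3
node4
EOF

# Boot the LAM runtime on exactly those hosts instead of letting
# the scheduler pick machines automatically
lamboot -v lamhosts

# Prepare the run for 4 processes, then launch across the 4 nodes
# (binary name mdrun_mpi is an assumption; yours may just be mdrun)
grompp -np 4 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun -np 4 mdrun_mpi -np 4 -s topol.tpr

# Shut the LAM runtime down when the run finishes
lamhalt
```

Pinning the hosts this way sidesteps the automatic distribution that both posters report as leaving the CPUs nearly idle.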


Alexandre Suman de Araujo
asaraujo at if.sc.usp.br
UIN: 6194055
IFSC - USP - São Carlos - Brasil
