[gmx-users] MPI scaling (was RE: MPI tips)

Mark Abraham Mark.Abraham at anu.edu.au
Wed Feb 1 02:07:37 CET 2006


David Mathog wrote:
> Presumably I'm doing something wrong here but so far the
> gromacs MPI performance has been abysmal. Gromacs 3.3, lam-mpi
> 7.1.1, 100baseT switched network, 20 compute nodes (max).  Gromacs
> was built shared and the relevant .so libraries are all in
> /usr/common/lib, which appears at that same location on all nodes.
> Additionally, that path is in /etc/ld.so.conf and ldconfig was
> run on all nodes after gromacs was set up with "make install".
> 
> It was suggested that the gmxdemo example was too small, so today
> I tried changing the original -d .5 value used with editconf to
> -d 2, -d 4, and finally -d 8.  Details are:
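
For reference, the steps David describes look roughly like the following.
The node name, binary path and file names are only illustrative guesses on
my part, not his actual commands:

  # confirm the shared libraries really resolve on a compute node
  rsh node01 ldd /usr/common/bin/mdrun

  # enlarge the demo system by increasing the solvent margin, then re-solvate
  editconf -f cpeptide.gro -o box.gro -d 4
  genbox -cp box.gro -cs -p topol.top -o solvated.gro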

It was also suggested that the simulations are too short. There is 
overhead in setting up the MPI system, as well as in each communication, 
so you want to run benchmarks that aren't dominated by that overhead. I 
suggest looking in the benchmarks section on the gromacs web page and 
running the benchmark systems you can get from there. Then you will have 
a basis for comparison with the results posted there, and people here 
will have more confidence that your problem isn't simply an error born 
of inexperience with gromacs.
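
In case it's useful, running one of those benchmark systems on your 
LAM/MPI setup would look something like this (the node count and file 
names are just an illustration, and note that with 3.3 the number of 
nodes has to be given to grompp as well as to mpirun):

  lamboot -v hostfile              # start the LAM daemons on the hosts in hostfile
  grompp -np 20 -f bench.mdp -c conf.gro -p topol.top -o bench.tpr
  mpirun -np 20 mdrun -np 20 -s bench.tpr -v    # mdrun here is the MPI-enabled binary
  lamhalt                          # shut the LAM universe down when finished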

> Are others seeing better CPU utilization with the MPI
> version of mdrun?  (Run something for a couple of minutes, do gstat,
> and look at the 1 minute load column.)
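
(That quick check is easy to repeat, by the way; something along these 
lines should do it, where the .tpr name is just a placeholder:

  mpirun -np 20 mdrun -np 20 -s bench.tpr -v &   # start a run in the background
  sleep 120                                      # let it run for a couple of minutes
  gstat                                          # look at the 1-minute load on each node

gstat is the Ganglia status tool, so its exact output will depend on your 
Ganglia setup.)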

I ran the standard benchmark on my main machine with a 3.3 beta back last 
July (see 
http://www.gromacs.org/pipermail/gmx-users/2005-July/016104.html), and 
scaling was good out to 16-32 processors on that hardware. My 
interconnects are a lot better than 100baseT, however.

Mark


