[gmx-users] Re: MPI scaling (was RE: MPI tips)

David Mathog mathog at caltech.edu
Wed Feb 1 20:14:28 CET 2006


> 
> You are running an extremely small number of timesteps, but increasing the
> system *size* to make the simulations take longer. That most likely means
> you're *mostly* increasing the overhead involved in setting up the system.
> So of course scaling is abysmal -- you're still running trivially short
> calculations, just *really big* trivially short calculations. Try running
> reasonable sized calculations that are long (MORE STEPS) and then check
> out scaling.

I don't really understand your reasoning for increasing the number
of steps: the nodes have to do some sort of communication at each
step, and whatever the ratio of that communication time to the
computation time per step is, it shouldn't vary with the number of
steps.
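
If the point is that there's a fixed per-run setup cost that only gets
amortized when the run is long, then I can see how the ratio would drift
with nsteps.  A toy model of that (every number below is invented,
nothing comes from an actual run):

    # Toy model: total run time = fixed setup + nsteps * per-step time.
    # All constants are made up for illustration; nothing here is measured.
    def total_time(nsteps, setup, per_step):
        return setup + nsteps * per_step

    for nsteps in (100, 200, 1000):
        t1 = total_time(nsteps, setup=60.0, per_step=6.0)  # hypothetical 1-CPU run
        t2 = total_time(nsteps, setup=80.0, per_step=3.5)  # hypothetical 2-CPU run
        print("nsteps=%4d  T_2/T_1 = %.4f" % (nsteps, t2 / t1))

In that model the ratio keeps sliding toward the per-step ratio (here
3.5/6.0 = 0.58) as nsteps grows, so if setup cost were the issue the
measured ratio should move noticeably between 100 and 200 steps.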

In any case, I changed nsteps from 100 to 200.  Time ratios for
two versus one processors (T_2/T_1) were:

Steps=100  502/695  = 0.7223
Steps=200  978/1368 = 0.7149

The difference is in the direction you predicted but it's so
small that it might just be noise.
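
Put another way, in terms of speedup and parallel efficiency (treating
the numbers above as total run times in seconds):

    # Speedup and efficiency for the two runs above; times taken as seconds.
    runs = {100: (695.0, 502.0), 200: (1368.0, 978.0)}  # nsteps: (T_1, T_2)
    for nsteps, (t1, t2) in sorted(runs.items()):
        speedup = t1 / t2
        efficiency = speedup / 2.0  # two processors
        print("nsteps=%d  speedup=%.2f  efficiency=%.0f%%"
              % (nsteps, speedup, 100.0 * efficiency))

Either way it comes out to roughly a 1.4x speedup, or about 70%
parallel efficiency, on two processors.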

It still looks to me like the primary scaling problem is that
for N>1 nodes, N-1 of them eventually stall and wait for the slowest
node to finish.  That's why the CPU time per node falls as N
increases.  Another way of saying that is that the problem is not
being divided evenly (by total compute time), resulting in a load
imbalance.  But there must be more to it than that, since the nodes
don't ever run at 99% CPU and then stall completely, but rather limp
along at 10% CPU or something like that.
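
For what it's worth, here is the kind of pattern I have in mind, as a
toy mpi4py sketch (nothing to do with the actual GROMACS source, and
the work numbers are made up): when the per-step work is uneven, the
fast ranks park at each synchronization point, so their average CPU
utilization drops without them ever stalling outright.

    # Toy mpi4py sketch of per-step load imbalance; not GROMACS code.
    # Run with something like:  mpirun -np 2 python imbalance.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    NSTEPS = 100
    busy = 0.0
    wall_start = MPI.Wtime()
    for step in range(NSTEPS):
        # Pretend rank 0 was handed twice as much work per step as the others.
        work = 0.002 if rank == 0 else 0.001
        t0 = MPI.Wtime()
        time.sleep(work)      # stand-in for the real force/integration work
        busy += MPI.Wtime() - t0
        comm.Barrier()        # the fast ranks wait here for the slowest one
    wall = MPI.Wtime() - wall_start

    print("rank %d busy for %.0f%% of the wall time" % (rank, 100.0 * busy / wall))

In a toy case like that the lightly loaded rank only shows about 50%
CPU; getting down to 10% would take a much worse imbalance, or
something else entirely.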

I also went down and watched the switch for the N=2 case.  The ports
for nodes 1 and 2 were flashing a bit, but the network traffic
involving these nodes was not at all heavy.  So for 2 nodes, at least,
the ethernet is not rate limiting.

> And, as already suggested, you may want to do this on one of the
> standard benchmark systems that was just pointed to.

I'll try it on something more closely resembling the problem we
really need to run.  It doesn't matter much to me if the benchmarks
run fast if the problem we actually need to run doesn't.

Thanks,

David Mathog
mathog at caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech


