[gmx-users] MPI scaling (was RE: MPI tips)

David Mobley dmobley at gmail.com
Wed Feb 1 18:05:00 CET 2006


>
> > It was also suggested that the simulations are too short.
>
> You lost me there.  The number of steps was the same, the times went up
> and up and up, but the scaling didn't improve all that much.  I also
> tried (since my last post) adding coulombtype=cut-off to the parameter
> files but that made no difference whatsoever.  Using the alternative
> scaling formula of:


You are running a very small number of timesteps, but increasing the system
*size* to make the simulations take longer. That most likely means you're
*mostly* increasing the overhead involved in setting up the system. So of
course scaling is abysmal -- you're still running trivially short
calculations, just *really big* trivially short calculations. Try running
reasonably sized calculations that are long (MORE STEPS) and then check the
scaling. And, as already suggested, you may want to do this on one of the
standard benchmark systems that was just pointed to.
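To make the point concrete, here's a back-of-the-envelope sketch (all
timing numbers are made up, not GROMACS measurements) of a run whose wall
time is a fixed setup cost plus perfectly parallel per-step work. With only
a handful of steps the setup term dominates and the apparent speedup
flatlines; with many steps the same machine looks like it scales fine:

```python
# Hypothetical timings -- a sketch of why a fixed setup overhead
# wrecks apparent MPI scaling for very short runs.

def apparent_speedup(n_steps, n_procs, t_setup=30.0, t_step=0.05):
    """Wall-time model: fixed setup cost plus perfectly parallel step work."""
    t_serial = t_setup + n_steps * t_step
    t_parallel = t_setup + n_steps * t_step / n_procs
    return t_serial / t_parallel

for n_steps in (100, 100_000):
    print(f"{n_steps} steps:",
          [round(apparent_speedup(n_steps, p), 2) for p in (1, 2, 4, 8)])
# 100 steps barely reaches 1.14x on 8 CPUs; 100,000 steps reaches ~7.7x.
```

The benchmark run just needs to be long enough that the per-step work swamps
the constant setup term.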



> It's 100baseT but your point is still valid.  Since there was no
> functional demo_mpi provided and I had to write one myself I'm wondering
> if I might not have some mpirun parameter set appropriate for lam-mpi
> running with this program.


Again, it looks most likely that you're simply running really short (that
is, FEW TIMESTEPS) runs which are dominated by the system setup time.

I would point out that anytime I submit even a non-MPI job to the queue
here, it takes a few seconds (sometimes 30-60) for it to start. If I only
ran a few timesteps, I would conclude it was extremely slow. I'm pretty sure
the bigger the system, the longer this initial overhead before it starts
doing anything. And this is for non-MPI jobs; the overhead is probably
larger for MPI jobs. So I still don't see why you think running really
short simulations of just a few timesteps should give you good benchmarks.
If you want something really easy to set up, just put a protein in a box of
water, or something, and then try running it for an hour or two.
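If you want to check how much of your wall time is that constant startup
overhead, one quick trick (a sketch with made-up wall times, not a GROMACS
tool) is to time the same system at two different step counts and difference
them -- the fixed setup term cancels, leaving the true per-step cost:

```python
def per_step_time(t_short, n_short, t_long, n_long):
    """Estimate per-step cost and fixed overhead from two wall times.

    Assumes total time = overhead + n_steps * t_step, so differencing
    the two runs cancels the constant overhead term.
    """
    t_step = (t_long - t_short) / (n_long - n_short)
    overhead = t_short - n_short * t_step
    return t_step, overhead

# Hypothetical wall times: 40 s for 100 steps, 90 s for 1100 steps.
t_step, overhead = per_step_time(40.0, 100, 90.0, 1100)
print(f"per-step: {t_step:.3f} s, setup overhead: {overhead:.1f} s")
# -> per-step: 0.050 s, setup overhead: 35.0 s
```

If the estimated overhead is a large fraction of your total run time, the
benchmark is too short to say anything about scaling.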

David



> Thanks,
>
> David Mathog
> mathog at caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech