[gmx-developers] Re: Gromacs 3.3.1 parallel benchmarking

Axel Kohlmeyer akohlmey at cmm.chem.upenn.edu
Tue Aug 15 19:13:34 CEST 2006


On 8/15/06, Michael Haverty <mghav at yahoo.com> wrote:
> Thanks for the feedback all.

[...]

> My execution of grompp and mdrun has been very simple
> and just using the "-np number_of_processors" flags
> except in the case of the shared memory machines where
> I used "-np number_of_processors -nt
> number_of_processors" for the mdrun flags.  I've also

wait... does that mean on 4 processors, you run the
job with mpirun -np 4 mdrun -np 4 -nt 4 ?
it should be mpirun -np 4 mdrun -np 4 -nt 1
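
i.e. one mpi process per processor and no extra threads on top,
otherwise you oversubscribe the cpus. something along these
lines, using the gromacs default file names (adjust to your setup):

  # generate a run input file split over 4 nodes
  grompp -np 4 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
  # start 4 mpi processes; mdrun's -np has to match, -nt stays at 1
  mpirun -np 4 mdrun -np 4 -nt 1 -s topol.tpr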

[...]

> but we've upgraded things such as the switch to
> gigabit so that we could get better scaling and
> learned to run within switch to get good scaling with
> DFT codes up to the 40-60 processor range.  We're

what kind of DFT codes? with plane wave codes,
i would doubt that. i have some rather old data posted here:
http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html#parallel

you should keep in mind that with better serial performance,
scaling becomes more of a problem.
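
just to put some made-up numbers on it: if communication costs
a fixed 2 ms per step, a machine that needs 20 ms of compute per
step loses about 9% to communication, while a machine that needs
only 5 ms loses almost 30%. the faster the single cpu, the sooner
the network becomes the bottleneck.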

on the topic of OS jitter or OS noise, you may want to have a look at, e.g.,
http://www.linux-mag.com/content/view/2278/

for a linux cluster it is probably not feasible to eliminate it
completely, but if you have the time and people willing to
look into it, i'd be very curious to see whether reducing the
number of daemon processes to the absolute minimum would
make a significant difference, especially in this case, where
you have a very latency-sensitive (and thus OS-noise-sensitive) application.
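
for what it's worth, on a red hat style init system that would
look roughly like this (the service names are just typical
examples, your nodes may run different daemons):

  # list the services that are switched on at boot
  chkconfig --list | grep ':on'
  # stop and disable daemons a compute node does not need
  service cups stop     && chkconfig cups off
  service sendmail stop && chkconfig sendmail off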

axel.

p.s.: depending on how willing you are to take a risk, you
may also want to try out the domain decomposition scheme in the
current gromacs cvs. that should be particularly helpful for your
huge system calculations. since our group here is considering using
gromacs for similar purposes, i'd be very interested to learn about
your experiences with that, too.

> starting to think it may be operating system issues,
> so we're going to meet with computing support later
> today to explore that.
>
> Mike


-- 
=======================================================================
Axel Kohlmeyer   akohlmey at cmm.chem.upenn.edu   http://www.cmm.upenn.edu
  Center for Molecular Modeling   --   University of Pennsylvania
Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
tel: 1-215-898-1582,  fax: 1-215-573-6233,  office-tel: 1-215-898-5425
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.


