[gmx-users] Performance of 4.6.1 vs. 4.5.5

Szilárd Páll szilard.pall at cbr.su.se
Sat Mar 9 16:08:03 CET 2013


As Mark said, we need concrete details to answer the question:
- log files (all four of them: 1/2 nodes, 4.5/4.6)
- hardware (CPUs, network)
- compilers
The 4.6 log files already contain most of the second and third items, but
not the network details.

Note that you can compare the performance summary table's entries one by
one and see what has changed.
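
If it helps, here is a rough helper (just a sketch, not part of GROMACS; it
assumes the summary in md.log starts with the usual
"R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G" header) that
prints that section from each log so the entries are easy to compare side
by side:

#!/usr/bin/env python
# Sketch: print the cycle/time accounting summary from each md.log given
# on the command line, so the timing entries can be compared by eye.
import sys

HEADER = "R E A L   C Y C L E"   # start of mdrun's timing summary
MAX_LINES = 40                   # roughly covers the whole table

def print_summary(path):
    with open(path) as log:
        lines = log.readlines()
    for i, line in enumerate(lines):
        if HEADER in line:
            print("==== %s ====" % path)
            # print the table that follows the header
            sys.stdout.write("".join(lines[i:i + MAX_LINES]))
            return
    print("==== %s ==== (no timing summary found)" % path)

for logfile in sys.argv[1:]:
    print_summary(logfile)

Save it as, say, compare_timings.py and run it on all four logs, e.g.
python compare_timings.py 1node_4.5.log 1node_4.6.log 2node_4.5.log
2node_4.6.log (the file names are just placeholders).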

I suspect that the answer is simply load imbalance, but we'll have to see
the numbers to know.
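
For the imbalance part, a quick check (again only a sketch; the exact
wording of these lines can vary a bit between versions) is to pull out the
load-balance statistics mdrun writes near the end of md.log, such as
"Average load imbalance" and "Average PME mesh/force load":

#!/usr/bin/env python
# Sketch: print the load-balance lines from each md.log given on the
# command line (lines mentioning DD load imbalance or the PME/PP load).
import sys

KEYWORDS = ("load imbalance", "PME mesh/force load")

for path in sys.argv[1:]:
    print("==== %s ====" % path)
    with open(path) as log:
        for line in log:
            if any(key in line for key in KEYWORDS):
                print("  " + line.strip())

A large "waiting due to load imbalance" percentage or a strongly skewed
PME/PP load in the 2-node logs would point to where the time is going.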


--
Szilárd


On Sat, Mar 9, 2013 at 3:00 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:

> On Sat, Mar 9, 2013 at 6:53 AM, Christopher Neale
> <chris.neale at mail.utoronto.ca> wrote:
>
> > Dear users:
> >
> > I am seeing a 140% performance boost when moving from gromacs 4.5.5 to
> > 4.6.1 when I run a simulation on a single node. However, I am "only"
> > seeing a 110% performance boost when running on multiple nodes. Does
> > anyone else see this? Note that I am not using the verlet cutoff scheme.
> >
>
> What's the processor and network for those runs?
>
> > I'm not sure that this is a problem, but I was surprised to see how big
> > the difference was between 1 and 2 nodes, while for 2-10 nodes I saw a
> > reliable 10% performance boost.
> >
>
> Not sure what you mean by "reliable 10% performance boost." Reporting
> actual ns/day rates would be clearer. Is a "140% performance boost" a
> factor of 1.4 more ns/day or a factor of 2.4 more ns/day?
>
> > Please note that, while I compiled the fftw (with sse2) and gromacs
> > 4.6.1, I did not compile the 4.5.5 version that I am comparing to (or
> > its fftw) so the difference might be in compilation options.
>
>
> Indeed.
>
>
> > Still, I wonder why the benefits of 4.6.1 are so fantastic on 1 node but
> > fall off to good-but-not-amazing on more than one node.
> >
>
> Finding the answer would start by examining the changes in the timing
> breakdowns in your .log files. Switching from using in-memory MPI to
> network MPI is a significant cost on busy/weak networks.
>
> > The system is about 43K atoms. I have not tested this with other
> > systems or cutoffs.
> >
> > My mdp file follows. Thank you for any advice.
> >
>
> Your run is probably not calculating energies very often, and 4.6 uses
> force-only kernels on the steps where only forces are needed.
>
> Mark
>
> > Chris.
> >
> > constraints = all-bonds
> > lincs-iter =  1
> > lincs-order =  6
> > constraint_algorithm =  lincs
> > integrator = sd
> > dt = 0.002
> > tinit = 0
> > nsteps = 2500000000
> > nstcomm = 1
> > nstxout = 2500000000
> > nstvout = 2500000000
> > nstfout = 2500000000
> > nstxtcout = 50000
> > nstenergy = 50000
> > nstlist = 10
> > nstlog=0
> > ns_type = grid
> > vdwtype = switch
> > rlist = 1.0
> > rlistlong = 1.6
> > rvdw = 1.5
> > rvdw-switch = 1.4
> > rcoulomb = 1.0
> > coulombtype = PME
> > ewald-rtol = 1e-5
> > optimize_fft = yes
> > fourierspacing = 0.12
> > fourier_nx = 0
> > fourier_ny = 0
> > fourier_nz = 0
> > pme_order = 4
> > tc_grps             =  System
> > tau_t               =  1.0
> > ld_seed             =  -1
> > ref_t = 310
> > gen_temp = 310
> > gen_vel = yes
> > unconstrained_start = no
> > gen_seed = -1
> > Pcoupl = berendsen
> > pcoupltype = semiisotropic
> > tau_p = 4 4
> > compressibility = 4.5e-5 4.5e-5
> > ref_p = 1.0 1.0
> > dispcorr = EnerPres
> >