[gmx-users] Performance degradation for Verlet cutoff-scheme compared to group

Олег Титов titovoi at qsar.chem.msu.ru
Tue Oct 14 00:21:17 CEST 2014


Thank you, Szilárd, for this list of options!

2014-10-13 22:00 GMT+04:00 Szilárd Páll <pall.szilard at gmail.com>:

> Hi,
>
> As the log file points out, you picked a combination of settings that
> does not have corresponding optimized non-bonded kernels:
>
> "LJ-PME with Lorentz-Berthelot is not supported with SIMD kernels,
> falling back to plain-C kernels
>
> Using plain C 4x4 non-bonded kernels
>
> WARNING: Using the slow plain C kernels. This should
> not happen during routine usage on supported platforms."
>
> Otherwise, if you switch to a set of options that have SIMD optimized
> kernels, you'll see that the performance difference is much less -
> especially if you compare single-node OpenMP-only runs with group
> scheme thread-MPI runs.
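>
> (For a like-for-like comparison on a single node you can also force the
> rank/thread layout yourself, e.g. something along the lines of
>
>   mdrun -ntmpi 8 -ntomp 1 -deffnm test   # group scheme: 8 thread-MPI ranks
>   mdrun -ntmpi 1 -ntomp 8 -deffnm test   # Verlet scheme: 8 OpenMP threads
>
> where -ntmpi and -ntomp just make explicit what mdrun picks by default in
> the two cases.)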
>
> You have three options:
> - Switch to using geometric combination rules instead of LB if you are
> OK with the (rather small) approximation error (see the .mdp sketch
> below);
> - Use GPUs where LJ-PME with LB combination rules is supported;
> - Use the group scheme.
> [ Bonus option: Implement LB combination rules for the Verlet SIMD
> kernels. ]
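>
> [ For the first option, the relevant part of the .mdp would look roughly
> like this (option names as I remember them from the 5.0 manual, so please
> double-check against your version; the topology keeps its Lorentz-Berthelot
> parameters, only the long-range LJ-PME grid part switches to the geometric
> rule):
>
>   cutoff-scheme    = Verlet
>   vdwtype          = PME          ; LJ-PME
>   lj-pme-comb-rule = Geometric    ; instead of Lorentz-Berthelot
> ]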
>
> Cheers,
> --
> Szilárd
>
>
> On Mon, Oct 13, 2014 at 5:37 PM, Олег Титов <titovoi at qsar.chem.msu.ru>
> wrote:
> > Good day,
> >
> > Here is the .log file
> > https://drive.google.com/file/d/0B3p_uIrSkPysS2kweC1QWUVfaTA
> >
> > Regards
> > Oleg Titov
> >
> > 2014-10-13 13:31 GMT+04:00 Mark Abraham <mark.j.abraham at gmail.com>:
> >
> >> Hi,
> >>
> >> Very likely that simulation is too small to make good use of that many
> >> cores (though if there's hyper-threading going on, then there may only
> >> be 8 real cores, and you may do better with hyper-threading off...).
> >> GROMACS's use of OpenMP does not do very well with lots of threads and
> >> few atoms. If you share a link to a log file on a file-sharing service,
> >> there might be relevant observations people could make. If running
> >> multiple replicates of your simulation is a sensible thing to do, you
> >> will likely get much better value from your hardware by running several
> >> such simulations per node (e.g. with mdrun_mpi -multi).
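> >>
> >> For example, to run four copies on one node (a sketch only; if I remember
> >> correctly, -multi appends the simulation index to the file names, so the
> >> inputs would need to be named test0.tpr ... test3.tpr):
> >>
> >>   mpirun -np 4 mdrun_mpi -multi 4 -deffnm test -ntomp 4
> >>
> >> which gives each of the four simulations one MPI rank and four OpenMP
> >> threads on a 16-core node.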
> >>
> >> Mark
> >>
> >> On Mon, Oct 13, 2014 at 10:55 AM, Олег Титов <titovoi at qsar.chem.msu.ru>
> >> wrote:
> >>
> >> > Thanks for your reply.
> >> >
> >> > On a different machine I've recompiled GROMACS with icc version 13.1.0
> >> > (gcc version 4.4.6 compatibility). The cmake command was:
> >> >
> >> >   I_MPI_CC=nvcc CC=icc CXX=icpc cmake ../gromacs-src \
> >> >       -DCMAKE_INSTALL_PREFIX=~/gromacs-5.0.2/ -DGMX_FFT_LIBRARY=MKL \
> >> >       -DGMX_MPI=OFF -DGMX_GPU=OFF -DGMX_BUILD_MDRUN_ONLY=ON \
> >> >       -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_intel \
> >> >       -DGMX_LIBS_SUFFIX=_intel
> >> >
> >> > This simulation took 10 hours on 16 CPU cores (15.5 ns/day). Is this
> >> > normal performance for a system with 1641 atoms?
> >> >
> >> > 2014-10-11 1:21 GMT+04:00 Mark Abraham <mark.j.abraham at gmail.com>:
> >> >
> >> > > On Fri, Oct 10, 2014 at 8:16 PM, Олег Титов
> >> > > <titovoi at qsar.chem.msu.ru> wrote:
> >> > >
> >> > > > Good day.
> >> > > >
> >> > > > I've got significant performance degradation when trying to use the
> >> > > > Verlet cutoff-scheme for a free energy calculation. My system
> >> > > > contains 1 bromobenzene molecule and 543 TIP3P waters. 16 hours of
> >> > > > calculation on an 8-core CPU resulted in a 2.5 ns trajectory. With
> >> > > > the group scheme I've got a 6.5 ns trajectory in 7 h 40 min.
> >> > > >
> >> > > > Interestingly, with the group scheme GROMACS tells me that it was
> >> > > > using 8 MPI threads, while with Verlet it used 1 MPI thread and 8
> >> > > > OpenMP threads. Both times I've launched the calculation with
> >> > > > "mdrun -deffnm test".
> >> > > >
> >> > >
> >> > > One scheme supports OpenMP, one doesn't...
> >> > >
> >> > >
> >> > > > I believe that this is caused by the "plain C kernels" that GROMACS
> >> > > > warns me about. Is there any way to overcome this issue?
> >> > > >
> >> > >
> >> > > Yes...
> >> > >
> >> > >
> >> > > > I have access only to old gcc 4.1.2 (without SSE4.1) and buggy icc
> >> > > > 11.1 on this machine.
> >> > > >
> >> > >
> >> > > Gromacs requires a real compiler to get performance, like the install
> >> > > guide says. You need to get one.
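> >> > >
> >> > > Once you have one, something along these lines should get you SIMD
> >> > > kernels (the compiler path below is just a placeholder for wherever a
> >> > > newer gcc lives on your machine, and double-check the GMX_SIMD value
> >> > > against what your CPU actually supports):
> >> > >
> >> > >   CC=/opt/gcc-4.8/bin/gcc CXX=/opt/gcc-4.8/bin/g++ \
> >> > >       cmake ../gromacs-src -DGMX_SIMD=SSE4.1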
> >> > >
> >> > > Mark
> >> > >
> >> > >
> >> > > > Thanks for your help.
> >> > > >
> >> > > > Oleg Titov

