[gmx-users] Pull code slows down mdrun -- is this expected?

Mark Abraham mark.j.abraham at gmail.com
Tue May 17 23:13:04 CEST 2016


Hi,


On Tue, May 17, 2016 at 10:41 PM Christopher Neale <
chris.neale at alum.utoronto.ca> wrote:

> Dear Mark:
>
> GROMACS-2016-beta1 is not any faster for this test case when using CPUs
> only with my previous usage, which was "gmx mdrun -ntmpi 4 -ntomp 6 -dlb
> yes -npme 0 -notunepme ..."
>
>

OK, thanks. With CPUs only, the typical OpenMP scaling improvements are
only seen with 2 (or maybe 4) threads per rank, assuming one's system can
support many domains (which yours probably can).
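
For concreteness, a minimal sketch of the kind of rank/thread sweep I mean,
reusing only the flags from your command line above (the -ntmpi/-ntomp
splits assume a 24-core node and are purely illustrative):

  # CPU-only: many thread-MPI ranks with few OpenMP threads each usually
  # beats a few ranks with many OpenMP threads.
  gmx mdrun -ntmpi 12 -ntomp 2 -dlb yes -npme 0 -notunepme ...
  gmx mdrun -ntmpi  8 -ntomp 3 -dlb yes -npme 0 -notunepme ...
  gmx mdrun -ntmpi  4 -ntomp 6 -dlb yes -npme 0 -notunepme ...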

> I was so far unable to compile this beta version for GPUs (somehow my gcc
> 4.4.7 can't find libstdc++ and the next


Yes, we now require C++11, so at least gcc 4.6. Was the error message not
clear enough about that?


> version up that I have at the moment is gcc 5.1.0, which works for the CPU
> only compilation, but it is >4.9 so I can't use it with CUDA).


This is something your cluster admins should have handled already. If CUDA
is installed, then a compiler that it supports should also be installed.
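
If a CUDA-supported gcc (>= 4.6 for C++11, and <= 4.9 for your CUDA
version) is available somewhere on the machine, a rough sketch of pointing
the build at it would be something like the following; the paths are only
placeholders, not anything I know about your cluster:

  # Tell both cmake and nvcc to use the same C++11-capable host compiler.
  cmake .. -DGMX_GPU=ON \
           -DCMAKE_C_COMPILER=/path/to/gcc-4.9/bin/gcc \
           -DCMAKE_CXX_COMPILER=/path/to/gcc-4.9/bin/g++ \
           -DCUDA_HOST_COMPILER=/path/to/gcc-4.9/bin/g++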


> In any event, it seems unlikely that an optimization with OpenMP
> parallelization would only work with the GPU code, so it *appears* as if
> the slowdown that I see is not helped by this new parallelization
>

You may well be right in practice, but the effects are complex. If the CPU
code path was limiting (as is likely), then making formerly serial parts
run faster with OpenMP does improve overall throughput to a much greater
extent with GPUs than with CPUs, because the extra hardware and parallelism
shorten the total length of the time step.
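
As a rough way to see where the time is going, the cycle and time
accounting table at the end of md.log breaks the step down by task; the
pull code shows up there as its own row (the exact label, e.g. "COM pull
force", may differ between versions). Comparing that table for runs with
and without the restraints would already tell us a lot, e.g.:

  # Print the per-task wall-time breakdown from the end of the log.
  grep -A 40 "C Y C L E" md.log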

I opened http://redmine.gromacs.org/issues/1963 as a place where I hope we
can gather some .tpr files and observations. There are people who might do
profiling analyses on behalf of the developers, and such inputs are some of
what we need.

Mark

> Thank you,
> Chris.
>
> ________________________________________
> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Mark
> Abraham <mark.j.abraham at gmail.com>
> Sent: 17 May 2016 15:40:55
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Pull code slows down mdrun -- is this expected?
>
> Hi,
>
> Yes, it is likely that adding more and more calls to code that runs in
> serial will have the kind of effect you see. The good news is that Berk
> added some OpenMP parallelization to the pull code for the 2016 release,
> which is in the recent beta. If you're prepared to try that out for
> speed, it would be much appreciated.
>
> It would also be interesting for me (at least) if you could share such a
> test case, so we can consider how best to implement future improvements
> that are relevant for things people actually want to do.
>
> Cheers,
>
> Mark
>
> On Tue, May 17, 2016 at 9:30 PM Christopher Neale <
> chris.neale at alum.utoronto.ca> wrote:
>
> > These benchmarking numbers are from GROMACS v5.1.2. Forgot to mention
> > that in my initial post.
> >
> > ________________________________________
> > From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> > gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of
> > Christopher Neale <chris.neale at alum.utoronto.ca>
> > Sent: 17 May 2016 15:27:37
> > To: gmx-users at gromacs.org
> > Subject: [gmx-users] Pull code slows down mdrun -- is this expected?
> >
> > Dear Users:
> >
> > I am writing to ask if it is expected that the pull code slows down
> > GROMACS in such a way that a single pull group has a fairly minor
> > effect, but many groups collectively really bring down the throughput.
> > Based on Mark's response to my previous post about the free energy code
> > slowing things down, I'm guessing this is about non-optimized kernels,
> > but I wanted to make sure.
> >
> > Specifically, I have an aqueous lipid bilayer system and I was trying
> > to keep it fairly planar by using the pull code in the z-dimension only
> > to restrain the headgroup phosphorus to a specified distance from the
> > bilayer center, using a separate pull-coord for each lipid.
> >
> > Without any such restraints, I get 55 ns/day with GPU/CPU execution.
> > However, if I add 1, 2, 4, 16, 64, or 128 pull code restraints, then
> > the speed goes to 52, 51, 50, 45, 32, and 22 ns/day respectively. That
> > is using pull-coord*-geometry = distance. If I use the cylinder
> > geometry, things are even worse: 51, 48, 44, 29, 14, and 9 ns/day for
> > the same respective numbers of pull restraints.
> >
> > I have also tested that the same slowdown exists on CPU-only runs. Here,
> > without the pull code I get 19 ns/day and with 1, 2, 4, 16, 64, or 128
> > pull code restraints I get 19, 18, 18, 15, 9, and 6 ns/day respectively.
> >
> > In case it matters, my usage is like this for a single restraint and
> > analogous for more restraints:
> >
> > pull=yes
> > pull-ncoords = 1
> > pull-ngroups = 2
> > pull-group1-name = DPPC
> >
> > pull-coord1-geometry = distance
> > pull-coord1-type = flat-bottom
> > pull-coord1-vec = 0 0 1
> > pull-coord1-start = no
> > pull-coord1-init = 2.5
> > pull-coord1-k = 1000
> > pull-coord1-dim = N N Y
> > pull-group2-name = DPPC_&_P_&_r_1
> > pull-coord1-groups = 1 2
> >
> > ** Note that I modified the source code to give a useful flat-bottom
> > restraint for my usage, but I also benchmarked with the unmodified
> > code, so the timings have nothing to do with the modified code that I
> > will eventually use.
> >
> > Thank you,
> > Chris.

