[gmx-users] Pull code slows down mdrun -- is this expected?

Szilárd Páll pall.szilard at gmail.com
Wed May 18 02:49:20 CEST 2016


Hi,

Could it be that you were running few threads per rank in the CPU-only runs
and many threads per rank in the GPU runs? If you rely on domain
decomposition, the pull work gets decomposed too, which parallelizes some of
that work across the ranks.
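
For example, on a 24-core node the two extremes would look something like
this (numbers purely illustrative; adjust to your hardware):

# many ranks, few threads each: the pull work gets spread over the DD ranks
gmx mdrun -ntmpi 8 -ntomp 3 -deffnm cpu_run
# few ranks, many threads each: a typical GPU setup, so less decomposition
gmx mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -deffnm gpu_run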

Also, if you look at the log files, you should see the pull cost recorded in
the performance table. That is a somewhat better indicator of the underlying
scaling/performance behavior.
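
For instance, something like this pulls the relevant row out of the cycle
accounting table at the end of md.log (the exact label, e.g. "COM pull
force", may differ between versions):

grep -i "pull" md.log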

Lastly, do share the logs; they're often easier to parse than summaries of
their content. :)

Cheers,

--
Szilárd
On Tue, May 17, 2016 at 11:39 PM, Christopher Neale <
chris.neale at alum.utoronto.ca> wrote:

> Dear Mark:
>
> I guess the error message was fairly clear in retrospect. It asked me to
> use a newer compiler, but since I knew I had to use gcc <= 4.9, I didn't
> immediately realize that there was a version > 4.4 and <= 4.9 that had what
> I needed (see the full error message at the end of this post). The indicated
> website ( http://manual.gromacs.org/documentation/2016-beta1/index.html )
> also doesn't say anything clear about the required gcc version (the material
> about 4.7 and later seemed to be all about performance on AMD...).
>
> As for sysadmins... you realize that we don't all have those, right? I
> compiled GROMACS 5.1.2 with gcc 4.4.7 and CUDA 6.5.14 and it seems to work
> fine for that version. Anyway, I suppose I can figure out how to compile a
> compiler.
>
> You are right that my GPU runs were (severely) CPU limited, so hopefully
> I'll see a speedup there.
>
> I'll upload my setup to that redmine soon. Thank you again for all of your
> help,
>
> Chris.
>
> ### The compilation error message:
>
> -- Performing Test CXX11_SUPPORTED - Failed
> CMake Error at cmake/gmxTestCXX11.cmake:86 (message):
>   This version of GROMACS requires a C++11 compiler.  Please use a newer
>   compiler or use the GROMACS 5.1.x release.  See the installation guide
> for
>   details.
> Call Stack (most recent call first):
>   CMakeLists.txt:164 (gmx_test_cxx11)
>
> ________________________________________
> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Mark
> Abraham <mark.j.abraham at gmail.com>
> Sent: 17 May 2016 17:12:52
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Pull code slows down mdrun -- is this expected?
>
> Hi,
>
>
> On Tue, May 17, 2016 at 10:41 PM Christopher Neale <
> chris.neale at alum.utoronto.ca> wrote:
>
> > Dear Mark:
> >
> > GROMACS-2016-beta1 is not any faster for this test case when using CPUs
> > only with my previous usage, which was "gmx mdrun -ntmpi 4 -ntomp 6
> > -dlb yes -npme 0 -notunepme ..."
> >
>
> OK, thanks. With CPUs only, OpenMP typically only scales well up to 2 or
> maybe 4 threads per rank, assuming one's system can support many domains
> (which yours probably can).
>
> > I was so far unable to compile this beta version for GPUs (somehow my gcc
> > 4.4.7 can't find libstdc++ and the next
>
>
> Yes, we now require C++11, so at least gcc 4.6. Was the error message not
> clear enough about that?
>
>
> > version up that I have at the moment is gcc 5.1.0, which works for the
> > CPU-only compilation but is >4.9, so I can't use it with CUDA).
>
>
> This is something your cluster admins should have handled already. If CUDA
> is installed, then a compiler that it supports should already be installed
> as well.
>
>
> > In any event, it seems unlikely that an optimization based on OpenMP
> > parallelization would only work with the GPU code, so it *appears* as if
> > the slowdown that I see is not helped by this new parallelization.
> >
>
> You may well be right in practice, but there are complex effects. If the
> CPU code path was limiting (as is likely), then making serial parts run
> faster with OpenMP does improve overall throughput with GPUs to a much
> greater extent than with CPUs: the extra hardware and parallelism have
> already shortened the total time step, so the remaining serial work is a
> larger fraction of it.
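>
> (Made-up numbers, just to illustrate: if a CPU-only step takes 2.0 ms, of
> which 0.2 ms is serial pull work, that is 10% of the step; if GPU offload
> shrinks the rest of the step so that the total is 0.8 ms, the same 0.2 ms
> is now 25% of it, so speeding up the pull code buys back proportionally
> more throughput in the GPU run.)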
>
> I opened http://redmine.gromacs.org/issues/1963 to be a place where I hope
> we can gather some .tpr files and observations. There are people who might
> do profiling analyses on behalf of developers, and such inputs are some of
> what we need.
>
> Mark
>
> > Thank you,
> > Chris.
> >
> > ________________________________________
> > From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> > gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Mark
> > Abraham <mark.j.abraham at gmail.com>
> > Sent: 17 May 2016 15:40:55
> > To: gmx-users at gromacs.org
> > Subject: Re: [gmx-users] Pull code slows down mdrun -- is this expected?
> >
> > Hi,
> >
> > Yes, it is likely that adding more and more calls to code that runs in
> > serial will have the kind of effect you see. The good news is that Berk
> > added some OpenMP parallelization to the pull code for the 2016 release,
> > which is in the recent beta. If you're prepared to try that out for
> > speed, it would be much appreciated.
> >
> > It would also be interesting for me (at least) if you could share such a
> > test case, so we can consider how best to implement future improvements
> > that are relevant for things people actually want to do.
> >
> > Cheers,
> >
> > Mark
> >
> > On Tue, May 17, 2016 at 9:30 PM Christopher Neale <
> > chris.neale at alum.utoronto.ca> wrote:
> >
> > > These benchmarking numbers are from GROMACS v5.1.2. I forgot to mention
> > > that in my initial post.
> > >
> > > ________________________________________
> > > From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> > > gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of
> > > Christopher Neale <chris.neale at alum.utoronto.ca>
> > > Sent: 17 May 2016 15:27:37
> > > To: gmx-users at gromacs.org
> > > Subject: [gmx-users] Pull code slows down mdrun -- is this expected?
> > >
> > > Dear Users:
> > >
> > > I am writing to ask whether it is expected that the pull code slows
> > > down GROMACS in such a way that a single pull group has a fairly minor
> > > effect, but many groups collectively really bring down the throughput.
> > > Based on Mark's response to my previous post about the free energy code
> > > slowing things down, I'm guessing this is about non-optimized kernels,
> > > but I wanted to make sure.
> > >
> > > Specifically, I have an aqueous lipid bilayer system and I was trying to
> > > keep it fairly planar by using the pull code in the z dimension only to
> > > restrain each headgroup phosphorus to a specified distance from the
> > > bilayer center, using a separate pull-coord for each lipid.
> > >
> > > Without any such restraints, I get 55 ns/day with GPU/CPU execution.
> > > However, if I add 1, 2, 4, 16, 64, or 128 pull code restraints, then the
> > > speed goes to 52, 51, 50, 45, 32, and 22 ns/day, respectively. That is
> > > using pull-coord*-geometry = distance. If I use the cylinder geometry,
> > > things are even worse: 51, 48, 44, 29, 14, and 9 ns/day for the same
> > > respective numbers of pull restraints.
> > >
> > > I have also verified that the same slowdown exists in CPU-only runs.
> > > Here, without the pull code I get 19 ns/day, and with 1, 2, 4, 16, 64,
> > > or 128 pull code restraints I get 19, 18, 18, 15, 9, and 6 ns/day,
> > > respectively.
> > >
> > > In case it matters, my usage is like this for a single restraint and
> > > analogous for more restraints:
> > >
> > > pull=yes
> > > pull-ncoords = 1
> > > pull-ngroups = 2
> > > pull-group1-name = DPPC
> > >
> > > pull-coord1-geometry = distance
> > > pull-coord1-type = flat-bottom
> > > pull-coord1-vec = 0 0 1
> > > pull-coord1-start = no
> > > pull-coord1-init = 2.5
> > > pull-coord1-k = 1000
> > > pull-coord1-dim = N N Y
> > > pull-group2-name = DPPC_&_P_&_r_1
> > > pull-coord1-groups = 1 2
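> > >
> > > For two restraints, the analogous extension would look roughly like this
> > > (a sketch only; it simply repeats the pattern with a second coordinate,
> > > and the second group name is just the obvious analogue of the first):
> > >
> > > pull-ncoords = 2
> > > pull-ngroups = 3
> > > pull-group3-name = DPPC_&_P_&_r_2
> > >
> > > pull-coord2-geometry = distance
> > > pull-coord2-type = flat-bottom
> > > pull-coord2-vec = 0 0 1
> > > pull-coord2-start = no
> > > pull-coord2-init = 2.5
> > > pull-coord2-k = 1000
> > > pull-coord2-dim = N N Y
> > > pull-coord2-groups = 1 3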
> > >
> > > ** Note that I modified the source code to give a flat-bottom restraint
> > > that is useful for my purposes, but I also benchmarked with the
> > > unmodified code, so the timings have nothing to do with the modified
> > > code that I will eventually use.
> > >
> > > Thank you,
> > > Chris.