[gmx-users] Gromacs 5.0 compilation slower than 4.6.5. What went wrong?

Abhi Acharya abhi117acharya at gmail.com
Sat Sep 6 08:58:59 CEST 2014


Thank you Mark and Szilard for your replies. They gave me more clarity on
how the new GROMACS works, especially its greater support for streamed
computing.

I hope David's problem is sorted too. :)

Thanks again,

Regards,
Abhishek Acharya


On Fri, Sep 5, 2014 at 10:45 PM, Szilárd Páll <pall.szilard at gmail.com>
wrote:

> On Fri, Sep 5, 2014 at 6:40 PM, Abhishek Acharya
> <abhi117acharya at gmail.com> wrote:
> > Dear Mark,
> >
> > Thank you for the insightful reply.
> > In the manual for GROMACS 5.0 it is mentioned that the Verlet scheme is
> > better for GPU systems.
>
> More correctly, only the Verlet scheme supports GPU acceleration. The
> algorithms used by the group scheme are not appropriate for GPUs or
> other wide-SIMD accelerators.
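> In practice that means a GPU-accelerated run needs something like
>
>     cutoff-scheme = Verlet
>
> in the .mdp (option name as in the 5.0 manual; GPU non-bonded offload,
> e.g. mdrun -nb gpu, is only available with that scheme), while
> "cutoff-scheme = group" keeps the CPU-only group kernels.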
>
> > Does that mean that we should give up on the group scheme, even though
> > we get good performance with it compared to Verlet?
>
> That's up to you to decide. The algorithms are different: the group
> scheme does not use a buffer by default, while the Verlet scheme does,
> and aims to control the drift (and keep it quite low by default).
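> With the Verlet scheme the buffer size is derived from a drift target
> that can be set in the .mdp; the value below is the documented 5.0
> default, in kJ/mol/ps per atom:
>
>     verlet-buffer-tolerance = 0.005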
>
> > The plan to remove the group cut-off scheme in the future suggests that
> > its costs must outweigh its benefits.
>
> What makes you conclude that? The reasons are described here:
> http://www.gromacs.org/Documentation/Cut-off_schemes
>
> In very brief summary: i) the group scheme is not suitable for
> accelerators and wide SIMD architectures; ii) energy conservation comes
> with a high performance penalty; iii) it is inconvenient at high
> parallelization as it increases load imbalance.
>
> Cheers,
> --
> Szilárd
>
> > Could you please shed a little light on this?
> > Thanks.
> >
> > Regards,
> > Abhishek
> >
> > -----Original Message-----
> > From: "Mark Abraham" <mark.j.abraham at gmail.com>
> > Sent: 9/5/2014 7:57 PM
> > To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
> > Subject: Re: [gmx-users] Gromacs 5.0 compilation slower than 4.6.5. What
> > went wrong?
> >
> > This cutoff-scheme difference is probably caused by using an .mdp file
> > that does not specify the cutoff scheme, and the default changed in 5.0.
> > grompp issued a note about this, if you go and check it. The change in
> > the -npme choice is a direct consequence of this; the heuristics
> > underlying the splitting choice approximately understand the relative
> > performance characteristics of the two implementations, and you can see
> > that in practice the reported PP/PME balance is decent in each case.
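> > So a like-for-like comparison of the two versions needs the scheme
> > pinned in the .mdp before grompp, e.g.
> >
> >     cutoff-scheme = group    ; reproduces the 4.6.5 default under 5.0
> >
> > (or Verlet in both runs, if a buffered comparison is preferred).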
> >
> > There is indeed a large chunk of water (which you can see in group-scheme
> > log files, e.g. the line in the FLOP accounting that says NB VdW & Elec.
> > [W3-W3,F] dominates the cost), and David's neighbour list is unbuffered.
> > This is indeed the regime where the group scheme might still out-perform
> > the Verlet scheme (depending on whether you value buffering in the
> > neighbour list, which you generally should!).
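> > (A quick way to find that line in a group-scheme log is something like
> >
> >     grep -F "W3-W3" md.log
> >
> > and then to compare its flop count with the rest of the accounting
> > table.)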
> >
> > Mark
> >
> >
> > On Fri, Sep 5, 2014 at 4:06 PM, Abhi Acharya <abhi117acharya at gmail.com>
> > wrote:
> >
> >> Hello,
> >> Is your system solvated with water molecules?
> >>
> >> The reason I ask is that, in the case of the 4.6.5 run, GROMACS used the
> >> group cut-off scheme, whereas 5.0 used the Verlet scheme. For systems
> >> with water molecules, the group scheme gives better performance than
> >> Verlet.
> >>
> >> For more check out:
> >> http://www.gromacs.org/Documentation/Cut-off_schemes
> >>
> >> Regards,
> >> Abhishek Acharya
> >>
> >> On Fri, Sep 5, 2014 at 7:28 PM, Carsten Kutzner <ckutzne at gwdg.de>
> wrote:
> >>
> >> > Hi,
> >> >
> >> > you might want to use g_tune_pme to find out the optimal number
> >> > of PME nodes for 4.6 and 5.0.
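> >> > A minimal invocation looks something like this (it runs a series of
> >> > short mdrun benchmarks with different PP/PME splits; see
> >> > g_tune_pme -h for the exact options of your build):
> >> >
> >> >     g_tune_pme -np 48 -s topol.tpr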
> >> >
> >> > Carsten
> >> >
> >> >
> >> >
> >> > On 05 Sep 2014, at 15:39, David McGiven <davidmcgivenn at gmail.com>
> wrote:
> >> >
> >> > > What is even more strange is that I tried with 16 PME ranks (mdrun
> >> > > -ntmpi 48 -v -c TEST_md.gro -npme 16), got a 15.8% performance loss,
> >> > > and the ns/day are very similar: 33 ns/day
> >> > >
> >> > > D.
> >> > >
> >> > > 2014-09-05 14:54 GMT+02:00 David McGiven <davidmcgivenn at gmail.com>:
> >> > >
> >> > >> Hi Abhi,
> >> > >>
> >> > >> Yes, I noticed that imbalance, but I thought GROMACS knew better
> >> > >> than the user how to split PP/PME!!
> >> > >>
> >> > >> How is it possible that 4.6.5 guesses better than 5.0?
> >> > >>
> >> > >> Anyway, I tried :
> >> > >> mdrun -nt 48 -v -c test.out
> >> > >>
> >> > >> It exits with an error: "You need to explicitly specify the number
> >> > >> of MPI threads (-ntmpi) when using separate PME ranks"
> >> > >>
> >> > >> Then:
> >> > >> mdrun -ntmpi 48 -v -c TEST_md.gro -npme 12
> >> > >>
> >> > >> Then again 35 ns/day with the warning :
> >> > >> NOTE: 8.5 % performance was lost because the PME ranks
> >> > >>      had less work to do than the PP ranks.
> >> > >>      You might want to decrease the number of PME ranks
> >> > >>      or decrease the cut-off and the grid spacing.
> >> > >>
> >> > >>
> >> > >> I don't know much about Gromacs so I am puzzled.
> >> > >>
> >> > >>
> >> > >>
> >> > >>
> >> > >> 2014-09-05 14:32 GMT+02:00 Abhi Acharya <abhi117acharya at gmail.com
> >:
> >> > >>
> >> > >>> Hello,
> >> > >>>
> >> > >>> From the log files it is clear that out of 48 cores, the 5.0 run
> >> > >>> had 8 cores allocated to PME while the 4.6.5 run had 12. This seems
> >> > >>> to have caused a greater load imbalance in the 5.0 run.
> >> > >>>
> >> > >>> If you look at the last table in both .log files, you will notice
> >> > >>> that the PME spread/gather wall-time value for 5.0 is more than
> >> > >>> double the value for 4.6.5.
> >> > >>>
> >> > >>> Try running the simulation with the -npme flag explicitly set to 12.
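> >> > >>> For example, with the thread-MPI setup from your command line:
> >> > >>>
> >> > >>>     mdrun -ntmpi 48 -npme 12 -v -c test.out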
> >> > >>>
> >> > >>> Regards,
> >> > >>> Abhishek Acharya
> >> > >>>
> >> > >>>
> >> > >>> On Fri, Sep 5, 2014 at 4:43 PM, David McGiven
> >> > >>> <davidmcgivenn at gmail.com> wrote:
> >> > >>>
> >> > >>>> Thanks Szilard, here it goes! :
> >> > >>>>
> >> > >>>> 4.6.5 : http://pastebin.com/nqBn3FKs
> >> > >>>> 5.0 : http://pastebin.com/kR4ntHtK
> >> > >>>>
> >> > >>>> 2014-09-05 12:47 GMT+02:00 Szilárd Páll <pall.szilard at gmail.com
> >:
> >> > >>>>
> >> > >>>>> mdrun writes a log file, named md.log by default, which contains,
> >> > >>>>> among other things, the results of hardware detection and
> >> > >>>>> performance measurements. The list does not accept attachments, so
> >> > >>>>> please upload it somewhere (Dropbox, Pastebin, etc.) and post a
> >> > >>>>> link.
> >> > >>>>>
> >> > >>>>> Cheers,
> >> > >>>>> --
> >> > >>>>> Szilárd
> >> > >>>>>
> >> > >>>>>
> >> > >>>>>> On Fri, Sep 5, 2014 at 12:37 PM, David McGiven
> >> > >>>>>> <davidmcgivenn at gmail.com> wrote:
> >> > >>>>>> The command lines in both cases are:
> >> > >>>>>> 1st:    grompp -f grompp.mdp -c conf.gro -n index.ndx
> >> > >>>>>> 2nd:    mdrun -nt 48 -v -c test.out
> >> > >>>>>>
> >> > >>>>>> By log file, do you mean the standard output/error? Attached to
> >> > >>>>>> the email?
> >> > >>>>>>
> >> > >>>>>> Thanks
> >> > >>>>>>
> >> > >>>>>> 2014-09-05 12:30 GMT+02:00 Szilárd Páll
> >> > >>>>>> <pall.szilard at gmail.com>:
> >> > >>>>>>
> >> > >>>>>>> Please post the command lines you used to invoke mdrun as well
> >> > >>>>>>> as the log files of the runs you are comparing.
> >> > >>>>>>>
> >> > >>>>>>> Cheers,
> >> > >>>>>>> --
> >> > >>>>>>> Szilárd
> >> > >>>>>>>
> >> > >>>>>>>
> >> > >>>>>>> On Fri, Sep 5, 2014 at 12:10 PM, David McGiven
> >> > >>>>>>> <davidmcgivenn at gmail.com> wrote:
> >> > >>>>>>>> Dear Gromacs users,
> >> > >>>>>>>>
> >> > >>>>>>>> I just compiled GROMACS 5.0 with the same compiler (gcc 4.7.2),
> >> > >>>>>>>> the same OS (RHEL 6), the same configuration options and
> >> > >>>>>>>> basically everything else identical to my previous GROMACS
> >> > >>>>>>>> 4.6.5 build, and when running one of our typical simulations I
> >> > >>>>>>>> get worse performance.
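> >> > >>>>>>>> Both versions are built with CMake; an illustrative configure
> >> > >>>>>>>> line of this kind (not the exact one used here) would be:
> >> > >>>>>>>>
> >> > >>>>>>>>     cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
> >> > >>>>>>>>              -DGMX_BUILD_OWN_FFTW=ON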
> >> > >>>>>>>>
> >> > >>>>>>>> 4.6.5 does 45 ns/day
> >> > >>>>>>>> 5.0 does 35 ns/day
> >> > >>>>>>>>
> >> > >>>>>>>> Do you have any idea of what could be happening ?
> >> > >>>>>>>>
> >> > >>>>>>>> Thanks.
> >> > >>>>>>>>
> >> > >>>>>>>> Best Regards,
> >> > >>>>>>>> D.
> >> > >>
> >> > >>
> >> >
> >> >
> >> > --
> >> > Dr. Carsten Kutzner
> >> > Max Planck Institute for Biophysical Chemistry
> >> > Theoretical and Computational Biophysics
> >> > Am Fassberg 11, 37077 Goettingen, Germany
> >> > Tel. +49-551-2012313, Fax: +49-551-2012302
> >> > http://www.mpibpc.mpg.de/grubmueller/kutzner
> >> > http://www.mpibpc.mpg.de/grubmueller/sppexa
> >> >
> >> >
> >>
>

