[gmx-users] Setting rcon according to system

Mark Abraham mark.j.abraham at gmail.com
Fri Nov 16 17:01:20 CET 2018


Hi,

On Fri, Nov 16, 2018 at 2:42 AM Sergio Perez <sperezconesa at gmail.com> wrote:

> Using lincs-order = 3 I get:
>
> Initializing Domain Decomposition on 100 ranks
> Dynamic load balancing: locked
> Initial maximum inter charge-group distances:
>    two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
> Minimum cell size due to bonded interactions: 0.000 nm
> Maximum distance for 4 constraints, at 120 deg. angles, all-trans: 0.745 nm
>

Clearly this has changed. But looking more closely at the code, I think
this means that there are more bonded interactions than you've suggested.
Can you have another look at that? Otherwise, using a smaller lincs-order
isn't valid unless you increase lincs-iter, per the docs.
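
For reference, an .mdp combination like the following keeps
(1 + lincs-iter) * lincs-order at least as large as the default 4 / 1 pair,
which is the rule of thumb the mdp documentation gives for not losing accuracy
when reducing lincs-order (the values below are only an illustration, not a
recommendation tuned to your system):

    constraint-algorithm = lincs
    lincs-order          = 3
    lincs-iter           = 2    ; (1 + 2) * 3 = 9, versus (1 + 1) * 4 = 8 with the defaults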


> Estimated maximum distance required for P-LINCS: 0.745 nm
> This distance will limit the DD cell size, you can override this with -rcon
> Guess for relative PME load: 0.04
> Will use 90 particle-particle and 10 PME only ranks
> This is a guess, check the performance at the end of the log file
> Using 10 separate PME ranks, as guessed by mdrun
> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> Optimizing the DD grid for 90 cells with a minimum initial size of 0.931 nm
> The maximum allowed number of cells is: X 5 Y 4 Z 4
>
> -------------------------------------------------------
> Program:     mdrun_mpi, version 2018.1
> Source file: src/gromacs/domdec/domdec.cpp (line 6571)
> MPI rank:    0 (out of 100)
>
> Fatal error:
> There is no domain decomposition for 90 ranks that is compatible with the
> given box and a minimum cell size of 0.930681 nm
> Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> settings
> Look in the log file for details on the domain decomposition
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> The best I think I can do is go down to 80 processors with -rcon 0.7
>

Maybe, subject to the caveat above.
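
For what it's worth, the numbers in your log already show why 90 PP ranks
cannot work. A minimal back-of-envelope sketch (my own reconstruction of the
bookkeeping, not the exact code path mdrun uses):

    # uses only numbers quoted in this thread
    import math

    box = (4.67070, 4.49090, 3.77930)   # nm, box vectors of the system
    min_cell = 0.745 * (1 / 0.8)        # P-LINCS estimate scaled by 1/-dds, ~0.931 nm

    cells = [math.floor(edge / min_cell) for edge in box]
    print(cells, math.prod(cells))      # [5, 4, 4] -> at most 80 cells

At most 5 x 4 x 4 = 80 domains fit, so 90 PP ranks has to fail, which is
consistent with your finding that fewer ranks plus a smaller -rcon can be made
to fit.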


> It would be a good idea to give at least a note about the lincs-order.
>

Having looked at the code, I can confirm it does walk the bonded network to
find the minimum range required. So I think your output is only consistent
with a system that actually has multiple consecutive bonds, and thus the
behaviour of the domain decomposition is not artificially excessive.
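
So in practice the choice is between fewer ranks and an explicit override. On
the command line that would look something like this (illustrative only;
-rcon is the real mdrun option, the rank counts and file names are just
placeholders based on this thread):

    # use few enough ranks that each domain stays larger than the P-LINCS estimate
    mpirun -np 48 mdrun_mpi -deffnm md

    # or keep more ranks and override the constraint communication distance,
    # only if you are sure constrained atoms never span that far
    mpirun -np 100 mdrun_mpi -deffnm md -rcon 0.7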

> But I think most importantly a comment on the online mdp options page (or
> the performance part of the website) would help.
>
> Going somewhat off topic, I think GROMACS should make an effort to generate
> a cohesive, unified web-based manual. A great example is that of PLUMED.
> GROMACS is a wonderful program, but I feel that the organization of its
> documentation, tutorials, etc. is so dispersed that it creates a learning
> curve that could be avoided.
>

Thanks for the feedback. We are working on that. The 2019 version will
unite the user guide and reference manual into a single Sphinx-based format
with cross-references to and from both parts, working as both HTML and PDF
- see http://manual.gromacs.org/documentation/2019-beta2/index.html. We
still need to migrate content in from the old website and retire it, which
will help cut down on such confusion.

Thanks!

Mark

> Thanks a lot!
> Sergio
>
>
>
> On Thu, Nov 15, 2018 at 9:45 PM Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > Ah I see. So unless your hydrated uranyl is modelled with bonded
> > interactions between uranyl atoms and water atoms, the only bonds in the
> > system are silicate hydroxyl, water, and uranyl. If so, then I suspect the
> > default value of lincs-order (which is 4, to suit highly connected
> > biomolecular use cases) is too high for the actual connectivity you have.
> > Reducing that to 3 will relax the minimum diameter that the domain
> > decomposition requires, which I feel is a more stable approach than
> > modifying -rcon. How does that work for you?
> >
> > Perhaps we should automate such a check in grompp, to cater for such weakly
> > connected use cases.
> >
> > Mark
> >
> > On Thu, Nov 15, 2018 at 3:25 AM Sergio Perez <sperezconesa at gmail.com>
> > wrote:
> >
> > > Actually the clay uses the ClayFF force field, which has bonds only on
> > > the OH units; the rest of the atoms are just LJ spheres with a charge.
> > > I guess the conclusion is still the same?
> > >
> > > On Wed, Nov 14, 2018 at 8:47 PM Mark Abraham <mark.j.abraham at gmail.com
> >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Wed, Nov 14, 2018 at 3:18 AM Sergio Perez <sperezconesa at gmail.com
> >
> > > > wrote:
> > > >
> > > > > Hello,
> > > > > First of all thanks for the help :)
> > > > > I don't necessarily need to run it with 100 processors, I just want
> > > > > to know how much I can reduce rcon, taking into account my knowledge
> > > > > of the system, without compromising the accuracy. Let me give some
> > > > > more details of my system. The system is a sodium montmorillonite
> > > > > clay with two solid aluminosilicate layers and two aqueous
> > > > > interlayers between them.
> > > > >
> > > >
> > > > I assume the silicate network has many bonds over a large space - these
> > > > adjacent bonds are the issue, not uranyl. (You would have the same
> > > > problem with a clay-only system.)
> > > >
> > > >
> > > > > The system has TIP4P waters, some OH bonds within the clay, and the
> > > > > bonds of the uranyl hydrated ion described in my previous email as
> > > > > constraints. The system is orthorhombic, 4.67070 x 4.49090 x 3.77930 nm,
> > > > > and has 9046 atoms.
> > > > >
> > > > > This is the output of GROMACS:
> > > > >
> > > > > Initializing Domain Decomposition on 100 ranks
> > > > > Dynamic load balancing: locked
> > > > > Initial maximum inter charge-group distances:
> > > > >    two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
> > > > > Minimum cell size due to bonded interactions: 0.000 nm
> > > > > Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.842 nm
> > > > > Estimated maximum distance required for P-LINCS: 0.842 nm
> > > > > This distance will limit the DD cell size, you can override this with -rcon
> > > > > Guess for relative PME load: 0.04
> > > > > Will use 90 particle-particle and 10 PME only ranks
> > > > >
> > > >
> > > > GROMACS has guessed to use 90 ranks in the real-space domain
> > > > decomposition, e.g. as an array of 6x5x3 ranks.
> > > >
> > > >
> > > > > This is a guess, check the performance at the end of the log file
> > > > > Using 10 separate PME ranks, as guessed by mdrun
> > > > > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > > > > Optimizing the DD grid for 90 cells with a minimum initial size of 1.052 nm
> > > > > The maximum allowed number of cells is: X 4 Y 4 Z 3
> > > > >
> > > >
> > > > ... but only 4x4x3 = 48 ranks can work with the connectivity of your
> > > > input. Thus you are simply using too many ranks for a small system.
> > > > You'd have to relax the tolerances quite a lot to get to use 90 ranks.
> > > > Just follow the advice in the first part of the message and use fewer
> > > > ranks :-)
> > > >
> > > > Mark
> > > >
> > > > -------------------------------------------------------
> > > > > Program:     mdrun_mpi, version 2018.1
> > > > > Source file: src/gromacs/domdec/domdec.cpp (line 6571)
> > > > > MPI rank:    0 (out of 100)
> > > > >
> > > > > Fatal error:
> > > > > There is no domain decomposition for 90 ranks that is compatible with
> > > > > the given box and a minimum cell size of 1.05193 nm
> > > > > Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> > > > > settings
> > > > > Look in the log file for details on the domain decomposition
> > > > >
> > > > > For more information and tips for troubleshooting, please check the
> > > > > GROMACS website at http://www.gromacs.org/Documentation/Errors
> > > > > -------------------------------------------------------
> > > > >
> > > > >
> > > > > Thank you for your help!
> > > > >
> > > > > On Wed, Nov 14, 2018 at 5:28 AM Mark Abraham <
> > mark.j.abraham at gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Possibly. It would be simpler to use fewer processors, such that
> > > > > > the domains can be larger.
> > > > > >
> > > > > > What does mdrun think it needs for -rcon?
> > > > > >
> > > > > > Mark
> > > > > >
> > > > > > On Tue, Nov 13, 2018 at 7:06 AM Sergio Perez <
> > sperezconesa at gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Dear gmx community,
> > > > > > >
> > > > > > > I have been running my system without any problems with 100
> > > > > > > processors. But I decided to make some of the bonds of my main
> > > > > > > molecule constraints. My molecule is not an extended chain; it is
> > > > > > > a molecular hydrated ion, in particular the uranyl cation with 5
> > > > > > > water molecules forming a pentagonal bipyramid. At this point I
> > > > > > > get a domain decomposition error and I would like to reduce rcon
> > > > > > > in order to run with 100 processors. Since I know, from the shape
> > > > > > > of my molecule, that two atoms connected by several constraints
> > > > > > > will never be further apart than 0.6 nm, can I safely use this
> > > > > > > value for -rcon?
> > > > > > >
> > > > > > > Thank you very much!
> > > > > > > Best regards,
> > > > > > > Sergio Pérez-Conesa

