[gmx-users] Problem with the mdrun_openmpi on cluster

Szilárd Páll pall.szilard at gmail.com
Mon Mar 14 18:22:21 CET 2016


On Mon, Mar 14, 2016 at 5:26 PM, James Starlight <jmsstarlight at gmail.com>
wrote:

> the error likely appears when I try to run a not very big system on a
> large number of CPUs in parallel
>
>
> my system is a receptor embedded in a membrane consisting of 120 lipids,
> and the input was produced by grompp
>
> Initializing Domain Decomposition on 64 nodes
> Dynamic load balancing: no
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
>     two-body bonded interactions: 1.174 nm, Bond, atoms 3610 3611
>   multi-body bonded interactions: 1.174 nm, Improper Dih., atoms 3604 3610
> Minimum cell size due to bonded interactions: 1.200 nm
> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 2.626 nm
> Estimated maximum distance required for P-LINCS: 2.626 nm
> This distance will limit the DD cell size, you can override this with -rcon
> Guess for relative PME load: 0.87
> Using 0 separate PME nodes, as guessed by mdrun
> Optimizing the DD grid for 64 cells with a minimum initial size of 2.626 nm
> The maximum allowed number of cells is: X 3 Y 3 Z 3
>

The line above indicates exactly what the issue is: if your current settings
limit you to 3*3*3 = 27 domains, asking for 64 domains is not going to work
unless you relax the limiting settings.

Quite clearly, the estimated P-LINCS distance requirement is what puts a
strong lower bound on the DD cell size. You need to evaluate (or check the
literature for) whether the "5 constraints, at 120 deg. angles, all-trans:
2.626 nm" estimate is reasonable for your case. Depending on the box size,
you may be able to relax this assumption and get e.g. a maximum 4x4x4
decomposition, which would allow you to use exactly 64 ranks (although I
doubt it would be efficient).
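
To make the arithmetic concrete (the box dimensions are not shown in the log,
so the numbers below are only illustrative): mdrun can fit at most
floor(L / 2.626 nm) cells along a box vector of length L, and the reported
maximum of 3 cells per dimension implies roughly 7.9 nm <= L < 10.5 nm. A
4x4x4 grid would therefore need the minimum cell size to drop below L/4,
i.e. to somewhere around 2.0-2.6 nm depending on your actual box, which is
what relaxing the P-LINCS estimate (or -rcon) would have to buy you.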

However, releases less ancient than the GROMACS 4.5 you seem to be using
support hybrid MPI+OpenMP parallelization, which is quite useful in exactly
such situations.
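
As a rough sketch (the binary name varies by installation; use whatever
MPI-enabled mdrun your cluster provides for a 4.6 or newer release),
something along these lines keeps all 64 cores busy while staying within the
27-domain limit:

mpiexec -np 8 gmx_mpi mdrun -ntomp 8 -deffnm sim

i.e. 8 MPI ranks = 8 DD domains (a 2x2x2 grid fits easily), each running 8
OpenMP threads.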

So I'd suggest considering the options in the following order:
- use a more recent GROMACS release (and the appropriate MARTINI settings)
- use hybrid MPI+OpenMP
- tweak -rcon (see the example below)
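
For the last option, and only if you can justify that your constrained
distances will stay shorter than the default estimate, a hypothetical
invocation would be

mpiexec -np 64 g_mdrun_openmpi -rcon 2.0 -deffnm sim

where 2.0 nm is merely a placeholder: pick the value from your topology or
the literature, not from whatever happens to make 64 domains fit. Once the
cell-size limit allows it, you can also request a specific grid explicitly
with -dd.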



>
>
> are there solutions for this kind of system, probably altering some
> cutoffs etc.?
>
> 2016-03-14 17:22 GMT+01:00 Smith, Micholas D. <smithmd at ornl.gov>:
> > What is your box size (x, y, z)?
> >
> > What happens if you use half that number of nodes?
> >
> > ===================
> > Micholas Dean Smith, PhD.
> > Post-doctoral Research Associate
> > University of Tennessee/Oak Ridge National Laboratory
> > Center for Molecular Biophysics
> >
> > ________________________________________
> > From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <
> gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of James
> Starlight <jmsstarlight at gmail.com>
> > Sent: Monday, March 14, 2016 12:19 PM
> > To: Discussion list for GROMACS users
> > Subject: [gmx-users] Problem with the mdrun_openmpi on cluster
> >
> > Hello,
> >
> > I am trying to submit a job on 64 nodes on my local cluster using the
> > combination of software below
> >
> >
> > DO="mpiexec -np 64"
> > PROG="g_mdrun_openmpi"
> >
> >
> > $DO $PROG -deffnm sim
> >
> >
> > and obtain the following error
> >
> >
> >
> > Program g_mdrun_openmpi, VERSION 4.5.7
> > Source code file:
> > /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec.c, line: 6436
> >
> > Fatal error:
> > There is no domain decomposition for 64 nodes that is compatible with
> > the given box and a minimum cell size of 2.6255 nm
> > Change the number of nodes or mdrun option -rcon or your LINCS settings
> >
> > Could someone provide me with a trivial solution?
> >
> > Thanks!
> >
> > J.

