[gmx-users] REMD-error

Mark Abraham mark.j.abraham at gmail.com
Wed Sep 4 11:12:13 CEST 2019


Hi,

On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bratin at nitk.edu.in>
wrote:

> Respected Mark Abraham,
>                                           The command line and the job
> submission script are given below
>
> #!/bin/bash
> #SBATCH -n 130 # Number of cores
>

Per the sbatch docs, -n tells SLURM how many (MPI) tasks you want to run. It
is not a request for a number of cores.

> #SBATCH -N 5   # no of nodes
>

This requests a specific number of nodes. To satisfy both instructions, MPI
has to start 26 tasks per node (130 tasks over 5 nodes). That would make sense
if your nodes had a multiple of 26 cores. My guess, based on the error
message, is that they instead have a multiple of 16 cores. MPI saw that you
asked to place more tasks on cores than there were cores available, and so it
did not set a number of OpenMP threads per MPI task; GROMACS then fell back on
a default, which came out to 16 threads per rank, and it can see that that
doesn't make sense.
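
If you're not sure how many cores each node actually has, SLURM can tell you.
A quick sketch (where <nodename> is a placeholder for one of your cluster's
node names):

  # list hostname and CPU count per node for the cpu partition
  sinfo -p cpu -o "%n %c"
  # or show full detail for a single node
  scontrol show node <nodename>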

If you want to use both -N and -n, then you need to choose values that make
sense for the number of cores per node. Easier might be to drop -N and use
-n 130 together with -c 2, to express what I assume is your intent of having
2 cores per MPI task. Then slurm+MPI can pass that message along properly to
OpenMP.
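
For example, the top of your script could look something like this (a minimal
sketch, assuming 2 cores per rank is what you want, and keeping your mdrun
options as they were):

  #!/bin/bash
  #SBATCH -n 130          # 130 MPI tasks = 2 ranks per replica for 65 replicas
  #SBATCH -c 2            # 2 cores per MPI task, intended as 2 OpenMP threads per rank
  #SBATCH -t 0-20:00:00   # Runtime in D-HH:MM
  #SBATCH -p cpu
  #SBATCH -o hostname_%j.out
  #SBATCH -e hostname_%j.err

  module load gromacs/2018.4
  # equil{0..64} expands in bash to equil0 equil1 ... equil64
  mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr \
      -multidir equil{0..64} -deffnm remd_nvt -cpi remd_nvt.cpt -append

If you want to be explicit about the thread count, you can also add -ntomp 2
to the mdrun line.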

Your other error message can only have come from running gmx_mpi_d with
-ntmpi rather than -ntomp, so that was just a typo we don't need to worry
about further.
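
For reference, the distinction (as I understand these options) is:

  # -ntomp sets the number of OpenMP threads per MPI rank and works with a
  # real-MPI build such as gmx_mpi_d:
  mpirun -np 130 gmx_mpi_d mdrun -ntomp 2 ...
  # -ntmpi sets the number of thread-MPI ranks, which is only supported when
  # GROMACS was compiled with thread-MPI, hence the fatal error you saw:
  gmx mdrun -ntmpi 4 ...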

Mark

> #SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
> #SBATCH -p cpu # Partition to submit to
> #SBATCH -o hostname_%j.out # File to which STDOUT will be written
> #SBATCH -e hostname_%j.err # File to which STDERR will be written
> #loading gromacs
> module load gromacs/2018.4
> #specifying work_dir
> WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1
>
>
> mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
> equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
> equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
> equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
> equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
> equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
> equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
> equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
> -deffnm remd_nvt -cpi remd_nvt.cpt -append
>
> On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > We need to see your command line in order to have a chance of helping.
> >
> > Mark
> >
> > On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bratin at nitk.edu.in>
> > wrote:
> >
> > > Dear all,
> > >             I am running one REMD simulation with 65 replicas. I am
> using
> > > 130 cores for the simulation. I am getting the following error.
> > >
> > > Fatal error:
> > > Your choice of number of MPI ranks and amount of resources results in
> > > using 16 OpenMP threads per rank, which is most likely inefficient. The
> > > optimum is usually between 1 and 6 threads per rank. If you want to run
> > > with this setup, specify the -ntomp option. But we suggest to change the
> > > number of MPI ranks.
> > >
> > > When I use the -ntomp option, it throws another error:
> > >
> > > Fatal error:
> > > Setting the number of thread-MPI ranks is only supported with thread-MPI
> > > and GROMACS was compiled without thread-MPI
> > >
> > >
> > > while GROMACS is compiled with thread-MPI...
> > >
> > > Please help me in this regard.