[gmx-users] FEP calculations on multiple nodes

Mark Abraham mark.j.abraham at gmail.com
Thu Aug 24 17:55:35 CEST 2017


Hi,

Thanks. That should not be the problem here, because all such computations
run only on the CPU... but hopefully we will see.

Mark

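Leandro's GPU observation below is easy to test directly: forcing the
nonbonded kernels onto the CPU with mdrun's -nb option takes the GPU code
out of the picture. A minimal sketch, assuming the -deffnm prefix md_0
used later in this thread:

    # If the crash persists with nonbondeds forced onto the CPU,
    # the GPU kernels are unlikely to be the culprit.
    gmx mdrun -deffnm md_0 -nb cpu
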
On Thu, 24 Aug 2017 17:35 Leandro Bortot <leandro.obt at gmail.com> wrote:

> Hello all,
>
>      This may add something: I had segmentation faults using flat-bottom
> restraints with GPUs before. I just assumed that this type of restraint was
> not supported on GPUs and moved to a CPU-only system.
>      Sadly it was some time ago and I don't have the files anymore.
>
> Best,
> Leandro
>
>
> On Thu, Aug 24, 2017 at 5:13 PM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > Thanks. Good lesson here - try simplifying until things work. That does
> > suggest there is a bug in flat-bottomed position restraints. Can you
> > please upload a tpr with those restraints, along with a report at
> > https://redmine.gromacs.org so we can reproduce and hopefully fix it?
> >
> > Mark
> >
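A sketch of how such a reproducer tpr is typically produced (file names
here are placeholders, not from the thread); grompp's -r option supplies
the reference coordinates for the position restraints, so the restraints
end up baked into the tpr:

    # Hypothetical inputs: md.mdp, start.gro, topol.top
    gmx grompp -f md.mdp -c start.gro -r start.gro -p topol.top -o fb_repro.tpr
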
> > On Thu, 24 Aug 2017 16:55 Vikas Dubey <vikasdubey055 at gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I have just checked with normal restraints. It works fine. The
> > > simulation crashes with flat-bottom restraints.
> > >
> > > On 24 August 2017 at 16:43, Mark Abraham <mark.j.abraham at gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Does it work if you just have the normal position restraints, or just
> > > > the flat-bottom restraints? In particular, I could imagine the latter
> > > > are not widely used and might have a bug.
> > > >
> > > > Mark
> > > >
> > > > On Thu, Aug 24, 2017 at 4:36 PM Vikas Dubey <vikasdubey055 at gmail.com>
> > > > wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > I have found out that position restraints are the issue in my FEP
> > > > > simulation. As soon as I switch off the position restraints, it
> > > > > works fine. I have the following restraint file for the ions in my
> > > > > system (I don't see any problems with it):
> > > > >
> > > > > [ position_restraints ]
> > > > > ; atom  type    fx    fy    fz
> > > > >      1     1     0     0  1000
> > > > >      2     1     0     0  1000
> > > > >      3     1     0     0  1000
> > > > >      4     1     0     0  1000
> > > > >      5     1     0     0  1000
> > > > >      6     1     0     0  1000
> > > > >      8     1     0     0  1000
> > > > >      9     1     0     0  1000
> > > > >     10     1     0     0  1000
> > > > >     11     1     0     0  1000
> > > > >     12     1     0     0  1000
> > > > >     13     1     0     0  1000
> > > > >     14     1     0     0  1000
> > > > >     15     1     0     0  1000
> > > > >     16     1     0     0  1000
> > > > >     17     1     0     0  1000
> > > > >     18     1     0     0  1000
> > > > >     19     1     0     0  1000
> > > > >     20     1     0     0  1000
> > > > >     21     1  1000  1000  1000
> > > > >
> > > > > [ position_restraints ]
> > > > > ; flat-bottom position restraint, here for potassium in site I
> > > > > ; atom  type  g (8 for a cylinder)  r (nm)  k
> > > > >      7     2     8     1  1000
> > > > >
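For reference, reading the second section under GROMACS conventions (5.1
and later; g = 8 is a cylinder along z): function type 2 selects a
flat-bottomed restraint that keeps the atom inside a cylinder of radius
r = 1 nm, with force constant k = 1000 kJ mol^-1 nm^-2 applied only
outside it, roughly

    V(d) = 0                   for d <= r
    V(d) = (k/2) * (d - r)^2   for d >  r

where d is the atom's distance from the cylinder axis.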
> > > > >
> > > > > On 22 August 2017 at 14:18, Vikas Dubey <vikasdubey055 at gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi, I use the following script for my cluster. Also, I think the
> > > > > > problem is calculation-specific. I have run quite a few normal
> > > > > > simulations, and they work fine:
> > > > > >
> > > > > >
> > > > > > #SBATCH --job-name=2_1_0
> > > > > > #SBATCH --mail-type=ALL
> > > > > > #SBATCH --time=24:00:00
> > > > > > #SBATCH --nodes=1
> > > > > > #SBATCH --ntasks-per-node=1
> > > > > > #SBATCH --ntasks-per-core=2
> > > > > > #SBATCH --cpus-per-task=4
> > > > > > #SBATCH --constraint=gpu
> > > > > > #SBATCH --output out.txt
> > > > > > #SBATCH --error  err.txt
> > > > > > #========================================
> > > > > > # load modules and run simulation
> > > > > > module load daint-gpu
> > > > > > module load GROMACS
> > > > > > export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> > > > > > export CRAY_CUDA_MPS=1
> > > > > >
> > > > > > srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
> > > > > > $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
> > > > > >
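As written, the script above runs a single MPI rank with four OpenMP
threads on one GPU node. Given the thread subject, a sketch of the same
job spread over two nodes, one rank per node (values are illustrative,
and the shebang is assumed since the quoted header omits it):

    #!/bin/bash -l
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=4

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    # One MPI rank per node, four OpenMP threads per rank.
    srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE \
         -c $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
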
> > > > > > On 22 August 2017 at 06:11, Nikhil Maroli <scinikhil at gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> Okay, you might need to consider
> > > > > >>
> > > > > >> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm XXXX  -gpu_id XXX
> > > > > >>
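A filled-in version of that command line, with illustrative values
assuming one GPU and four cores (md_0 matches the -deffnm prefix used
elsewhere in the thread):

    gmx mdrun -v -ntmpi 1 -ntomp 4 -deffnm md_0 -gpu_id 0
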
> > > > > >>
> > > > > >>
> > > > > >> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
> > > > > >>
> > > > > >> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm


More information about the gromacs.org_gmx-users mailing list