[gmx-users] problem using more than 1 gpu on a single node - Not all bonded interactions have been properly assigned to the domain decomposition cells

Carlos Navarro carlos.navarro87 at gmail.com
Tue Jul 2 15:57:42 CEST 2019


Dear Mark,
Thanks for the reply.
I built the system using CHARMM-GUI, and looking into the .itp files for the
protein, lipids, ions and water molecules (I forgot to mention that I'm
simulating a protein channel embedded in a POPC membrane), I don't see the
[intermolecular_interactions] section you mentioned.
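
In case it helps, a quick way to rule this out is to grep the whole
topology tree for the section name. A minimal sketch, assuming the usual
CHARMM-GUI layout (a main topol.top that #includes .itp files from a
toppar/ directory; adjust the paths for your system):

  # search the main topology and every included .itp for the section
  grep -rni "intermolecular_interactions" topol.top toppar/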

Best regards,
Carlos


——————
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarro87 at gmail.com or cnavarro at utalca.cl

On July 2, 2019 at 3:22:24 PM, Mark Abraham (mark.j.abraham at gmail.com)
wrote:

Hi,

If you were using the [intermolecular_interactions] topology file section,
there's a known bug that might have produced these symptoms. It's fixed in
2019.3, so if you think that might apply to you, please update your GROMACS
installation and let us know how it goes!
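
For reference, that section (if present) would sit at the very end of the
.top file, below [ molecules ], and refer to atoms by global indices. A
minimal sketch of its shape; the atom numbers and parameters here are
invented purely for illustration:

  [ intermolecular_interactions ]
  [ bonds ]
  ; ai      aj      type    b0 (nm)   kb (kJ mol^-1 nm^-2)
    1234    56789   6       0.50      1000.0

If a grep of the topology for that name turns up nothing, the bug should
not apply.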

Mark

On Tue, 2 Jul 2019 at 15:01, Carlos Navarro <carlos.navarro87 at gmail.com>
wrote:

> Dear gmx-users,
> This is my first time running GROMACS on a server (I mainly work on a
> workstation), and I'm having trouble using more than one GPU per job
> efficiently. This is my script:
>
> #!/bin/bash -x
> #SBATCH --job-name=gro16AtTPC1
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=10
> #SBATCH --cpus-per-task=4
> #SBATCH --output=4gpu.%j
> #SBATCH --error=4gpuerr.%j
> #SBATCH --time=00:02:00
> #SBATCH --gres=gpu:4
>
> module load Intel/2019.3.199-GCC-8.3.0
> module load ParaStationMPI/5.2.2-1
> module load IntelMPI/2019.3.199
> module load GROMACS/2019.1
> export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
>
> #################################
> # --- DEFINE YOUR VARIABLES --- #
> #################################
> #
> #
>
> WORKDIR1=/p/project/chdd22/gromacs/benchmark/AtTPC1
> cd $WORKDIR1
> srun --gres=gpu:4 gmx mdrun -s md.tpr -deffnm test16-4gpu -resethway \
>      -dlb auto -ntmpi 4 -pin on -pinoffset 0 &
>
> wait
>
> #
> # --- Exit this script
> #
> exit
>
> and I'm getting the following error message:
> Not all bonded interactions have been properly assigned to the domain
> decomposition cells
> A list of missing interactions:
> Bond of 26920 missing 146
> U-B of 118884 missing 877
> Proper Dih. of 192452 missing 2623
> Improper Dih. of 3822 missing 6
> LJ-14 of 167422 missing 1572
>
> From my understanding, GROMACS is not able to properly distribute the
> domains across the GPUs. Is there a way to solve this?
> Some additional information:
> System size: ~200k atoms
> Node: 40 cores + 40 threads
> GPUs per node: 4 NVIDIA Tesla V100
>
> if you need more info just let me know.
> Best regards,
> Carlos
> --
>
> ----------
> Carlos Navarro Retamal
> Bioinformatic Engineering. PhD
> Postdoctoral Researcher in Center for Bioinformatics and Molecular
> Simulations
> Universidad de Talca
> Av. Lircay S/N, Talca, Chile
> T: (+56) 712201 798
> E: carlos.navarro87 at gmail.com or cnavarro at utalca.cl