[gmx-users] Using CPU with GPU

Mark Abraham mark.j.abraham at gmail.com
Wed Feb 15 03:14:20 CET 2017


Hi,

Unfortunately, your mail is pretty garbled, but in 5.0.7 you needed to
specify gmx_mpi mdrun -gpu_id with an id for every rank on a node that was
doing short-ranged PP work. In your case, that's probably

gmx_mpi mdrun -nb gpu -deffnm bact -gpu_id 00000000001111111111

Or you can update to a more recent version, which doesn't need that detail,
runs faster, and has fewer bugs ;-)
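
For completeness, here is a sketch of your job script with that flag added
(based on the script you quoted, untested on my side; I changed nodes=1 to
nodes=3 to match the 60 ranks PBS actually gave you, and simplified the
NPROCS line):

#!/bin/bash
#PBS -A super_star2
#PBS -q checkpt
#PBS -l nodes=3:ppn=20:gpus=2
#PBS -l walltime=12:00:00
#PBS -V
#PBS -j oe
#PBS -N lyzo

export NPROCS=`wc -l < $PBS_NODEFILE`
cd /work/rj/work_file
# One gpu_id digit per PP rank on each node:
# ranks 0-9 share GPU 0, ranks 10-19 share GPU 1.
mpirun -machinefile $PBS_NODEFILE -np $NPROCS `which mdrun_mpi` \
    -nb gpu -gpu_id 00000000001111111111 -deffnm bact

Alternatively, fewer PP ranks per node with OpenMP threads (e.g. -np 6 in
total with mdrun_mpi -ntomp 10 -gpu_id 01) often balances two GPUs better
than 20 ranks per node, but the string above is the smallest change to what
you have.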

Mark

On Sat, Feb 11, 2017 at 3:08 AM RJ <rajiv at kaist.ac.kr> wrote:

> Dear gmx users,
>
> Could you suggest a correct md launch configuration? I am using a
> university cluster and would like to use 3 nodes (3 * 20 = 60 MPI
> processes), each with 2 NVIDIA Tesla K20Xm GPUs. Kindly suggest the
> optimal way to get a fast md run. I tried the following bash script,
> but it ends with errors:
>
> -------------------------------------------------------
> #!/bin/bash
> #PBS -A super_star2
> #PBS -q checkpt
> #PBS -l nodes=1:ppn=20:gpus=2
> #PBS -l walltime=12:00:00
> #PBS -V
> #PBS -j oe
> #PBS -N lyzo
>
> # mdrun_mpi is the MPI version of mdrun.
> export EXEC=mdrun_mpi
> #export WORKDIR=$PBS_O_WORKDIR
> export NPROCS=`wc -l $PBS_NODEFILE | gawk '//{print $1}'`
> export DIR=/work/rj/work_file
> cd $DIR
> mpirun -machinefile $PBS_NODEFILE -np $NPROCS `which $EXEC` -nb gpu -deffnm bact
> -------------------------------------------------------
>
> PBS has allocated the following nodes:
> qb136
> qb140
> qb182
> A total of 60 processors on 3 nodes allocated
> -------------------------------------------------------
> Number of hardware threads detected (20) does not match the number
> reported by OpenMP (1).
> Consider setting the launch configuration manually!
> Reading file bact.tpr, VERSION 5.0.7 (single precision)
> Changing nstlist from 10 to 40, rlist from 1.4 to 1.472
> Using 60 MPI processes
> Using 1 OpenMP thread per MPI process
> 2 GPUs detected on host qb136:
>   #0: NVIDIA Tesla K20Xm, compute cap.: 3.5, ECC: yes, stat: compatible
>   #1: NVIDIA Tesla K20Xm, compute cap.: 3.5, ECC: yes, stat: compatible
> 2 GPUs auto-selected for this run.
> Mapping of GPUs to the 20 PP ranks in this node: #0, #1
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.7
> Source code file: /project/fchen14/gromacs-5.0.7/src/gromacs/gmxlib/gmx_detect_hardware.c, line: 388
>
> Fatal error:
> Incorrect launch configuration: mismatching number of PP MPI processes
> and GPUs per node.
> mdrun_mpi was started with 20 PP MPI processes per node, but only 2
> GPUs were detected.
> For more information and tips for troubleshooting, please check the
> GROMACS website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------