[gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

Dave M dave.gromax at gmail.com
Fri Dec 13 08:44:24 CET 2019


Hi Paul,

I just jumped into this discussion, but I am wondering: is
CUDA_VISIBLE_DEVICES equivalent to providing -gpu_id to mdrun?
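That is, would, e.g.,

CUDA_VISIBLE_DEVICES=0 mdrun -ntomp 24 -pin on

behave the same as

mdrun -ntomp 24 -gpu_id 0 -pin on ?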
Also, my simulations run slower when several of them share a node with
multiple GPUs. E.g., on a node with 4 GPUs and 64 CPU cores:
mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on
mpirun -np 1 mdrun -ntomp 24 -gpu_id 2 -pin on

If I run just one simulation (with the same command as above), I get
almost double the performance.
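(Could it be that both runs are being pinned to the same cores? Would
offsetting them, e.g. with something like

mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on -pinoffset 0
mpirun -np 1 mdrun -ntomp 24 -gpu_id 2 -pin on -pinoffset 24

avoid the overlap? -pinoffset is just my guess at the right knob here.)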

Dave

On Thu, Dec 12, 2019 at 11:22 PM Paul Bauer <paul.bauer.q at gmail.com> wrote:

> Hello,
>
> the error you are getting at the end means that your simulation likely
> does not use PME, or uses it in a way that is not implemented to run on
> the GPU.
> You can still run the nonbonded calculations on the GPU; just remove the
> -pme gpu flag (see the sketch below).
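>
> For example, a sketch of your command with the PME offload dropped
> (I have also dropped -gputasks, since with only the nonbondeds on the
> GPU there is just a single GPU task to map):
>
> gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -nb gpu -gpu_id 0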
>
> For running different simulations on your GPUs, you need to set the
> environment variable CUDA_VISIBLE_DEVICES so that each simulation only
> sees one of the available GPUs.
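>
> For example (a sketch, assuming bash and that each run has its own
> working directory; the -ntomp and -pinoffset values are my guess at
> splitting the 24 hardware threads of your two 6-core CPUs evenly):
>
> CUDA_VISIBLE_DEVICES=0 gmx_tmpi mdrun -ntmpi 1 -ntomp 12 -pin on -pinoffset 0 &
> CUDA_VISIBLE_DEVICES=1 gmx_tmpi mdrun -ntmpi 1 -ntomp 12 -pin on -pinoffset 12
>
> With CUDA_VISIBLE_DEVICES set this way, each process sees a single GPU,
> so no -gpu_id is needed.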
>
> Cheers
>
> Paul
>
> On 13/12/2019 06:22, Pragati Sharma wrote:
> > Hello all,
> >
> > I am running a polymer melt with 100000 atoms, 2 fs time step, PME, on a
> > workstation with the following specifications:
> >
> > 2X Intel Xeon 6128 3.4 GHz 6-core CPUs
> > 2X 16 GB DDR4 2666 MHz RAM
> > 2X RTX 2080Ti 11 GB
> >
> > I have installed a GPU- and thread-MPI-enabled GROMACS 2019.0 build
> > using:
> >
> > cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
> > -DGMX_THREAD_MPI=ON -DGMX_GPU=ON
> >
> > While running a single job with the command below, I am getting a
> > performance of 65 ns/day:
> >
> > gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -gpu_id 0 -ntmpi 1 -ntomp 24
> >
> > Q. However, I want to run two different simulations at a time, each
> > using a share of the CPU cores and one GPU. Can somebody help me with
> > the mdrun command (what combination of -ntmpi and -ntomp) to run two
> > simulations with efficient utilization of the CPU cores and one GPU
> > each?
> >
> > Q. I have also tried using the GPU for the PME calculation with
> > -pme gpu, as in the command
> >
> > gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -gputasks 01
> > -nb gpu -pme gpu
> >
> > but I get the error below:
> >
> > "Feature not implemented:
> > The input simulation did not use PME in a way that is supported on the
> > GPU."
> >
> > Why does this error occur? Should I add extra options when compiling
> > GROMACS?
> >
> > Thanks
>
>
> --
> Paul Bauer, PhD
> GROMACS Release Manager
> KTH Stockholm, SciLifeLab
> 0046737308594
>

