[gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080 Ti

Paul Bauer paul.bauer.q at gmail.com
Fri Dec 13 08:22:17 CET 2019


Hello,

The error you are getting at the end means that your simulation likely 
does not use PME at all, or uses it in a way that is not implemented to 
run on the GPU.
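
For reference, the GPU PME path in 2019 only handles plain PME. A minimal 
sketch of the .mdp electrostatics settings it accepts (assuming defaults 
elsewhere; please check this against your own input):

cutoff-scheme = Verlet    ; required for any GPU offload
coulombtype   = PME       ; plain PME only; PME-Switch etc. stay on the CPU
pme-order     = 4         ; the only interpolation order the GPU path supports
vdwtype       = Cut-off   ; LJ-PME is not offloaded in 2019
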
You can still run the nonbonded calculations on the GPU; just remove the 
-pme gpu flag.
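With the command from your mail, that would be something like (a sketch 
reusing your file names):

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -nb gpu -gpu_id 0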

For running different simulations on your GPUs, you need to set the 
environment variable CUDA_VISIBLE_DEVICES so that each simulation only 
sees one of the available GPUs.
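
For example, to split your 12 physical cores between two runs, something 
like this (a sketch; run1.tpr/run2.tpr are placeholder names, and you 
should verify the thread pinning in each md.log):

# first simulation sees only GPU 0, pins to hardware threads 0-11
CUDA_VISIBLE_DEVICES=0 gmx_tmpi mdrun -v -s run1.tpr -ntmpi 1 -ntomp 12 -nb gpu -pin on -pinoffset 0 -pinstride 1 &
# second simulation sees only GPU 1 (which it addresses as id 0), threads 12-23
CUDA_VISIBLE_DEVICES=1 gmx_tmpi mdrun -v -s run2.tpr -ntmpi 1 -ntomp 12 -nb gpu -pin on -pinoffset 12 -pinstride 1 &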

Cheers

Paul

On 13/12/2019 06:22, Pragati Sharma wrote:
> Hello all,
>
> I am running a polymer melt with 100000 atoms, 2 fs time step, PME, on a
> workstation with specifications:
>
> 2x Intel Xeon 6128 (3.4 GHz, 6-core) CPU
> 2x 16 GB DDR4 2666 MHz RAM
> 2x RTX 2080 Ti 11 GB
>
> I have installed GPU- and thread-MPI-enabled GROMACS 2019.0 using:
>
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
> -DGMX_THREAD_MPI=ON -DGMX_GPU=ON
>
> While running a single job with the command below, I am getting a
> performance of 65 ns/day.
>
> gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -gpu_id 0 -ntmpi 1 -ntomp 24
>
> Q. However, I want to run two different simulations at a time, using the
> CPU cores and one GPU for each. Can somebody help me with the mdrun
> command (what combination of -ntmpi and -ntomp) to run two simulations
> with efficient utilization of the CPU cores and one GPU each?
>
> Q. I have also tried using the GPU for the PME calculations with
> -pme gpu, as in the command
>
> gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -gputasks 01 -nb
> gpu -pme gpu
>
> but I get the error below:
>
>
> *"Feature not implemented:The input simulation did not use PME in a way
> that is supported on the GPU."*
>
> Why is this error coming up? Should I add extra options when compiling
> GROMACS?
>
> Thanks


-- 
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594


