[gmx-users] best performance on GPU

Maryam maryam.kowsar at gmail.com
Fri Aug 2 00:04:28 CEST 2019


Dear all,
I want to run a simulation of a system of approximately 12,000 atoms in
GROMACS 2016.6 on a GPU, on a machine with the following configuration:
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support: CUDA
SIMD instructions: AVX2_256
FFT library: fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
Built on: Fri Jun 21 09:58:11 EDT 2019
Built by: julian at BioServer [CMAKE]
Build OS/arch: Linux 4.15.0-52-generic x86_64
Build CPU vendor: AMD
Build CPU brand: AMD Ryzen 7 1800X Eight-Core Processor
Build CPU family: 23   Model: 1   Stepping: 1
Number of GPUs detected: 1
  #0: NVIDIA GeForce RTX 2080 Ti, compute cap.: 7.5, ECC: no, stat: compatible
I have tried different commands to get the best performance, and I don't
know what I am missing. The fastest run so far comes from this command:

gmx mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v

which gives only 10 ns/day, so the run would take about two months to
finish.
I have also tried several variations to tune it, for example:

gmx mdrun -ntomp 6 -pin on -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15 -append -gpu_id 0 -nb auto

According to the GROMACS website, with this hardware I should be able to
reach about 295 ns/day.
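For reference, the next invocation I plan to try is a minimal single-GPU
setup (a sketch only; one thread-MPI rank with eight OpenMP threads is my
assumption for the eight-core Ryzen, not a verified optimum):

gmx mdrun -s md.tpr -deffnm md -ntmpi 1 -ntomp 8 -nb gpu -pin on -tunepme

My understanding is that in GROMACS 2016 PME still runs on the CPU even
when non-bonded work is offloaded to the GPU, so the CPU thread layout
matters for the overall rate.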
Could you help me figure out what I am missing that prevents me from
reaching the best performance level?
Thank you