[gmx-users] GPU Command
Hollingsworth, Bobby
louishollingsworth at g.harvard.edu
Wed Apr 18 22:18:35 CEST 2018
I would tune your launch with a short (~100 ps) benchmark run, testing
several different options for optimal performance (it can really save some
time). In the setup you mention, performance will depend heavily on your
CPUs, so I would recommend installing GROMACS 2018, which can offload PME
to the GPU; otherwise you face a potential CPU bottleneck.
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -pme gpu -npme 1 -nb gpu -gputasks 0011 -deffnm md -nsteps 50000 -resetstep 25000
This launches 4 ranks, one of which is a dedicated PME rank on GPU 1
(-gputasks 0011 assigns the four GPU tasks to GPUs 0, 0, 1, 1 in rank
order; md is a placeholder run name, so substitute your own). A variation
of this launch configuration gets me ~3x the performance of PME on the CPU.
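Flag by flag (same command, annotated):
# -np 4            four MPI ranks
# -ntomp 4         four OpenMP threads per rank (4 x 4 = your 16 cores)
# -npme 1          one rank dedicated to PME, mapped to GPU 1 by -gputasks 0011
# -nsteps 50000    override the .tpr step count for a short benchmark
# -resetstep 25000 reset the cycle counters halfway through, so the
#                  reported ns/day excludes startup costs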
Others to consider:
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -pme cpu -nb gpu -gputasks 0011 -deffnm md -nsteps 50000 -resetstep 25000

mpirun -np 3 gmx_mpi mdrun -ntomp 5 -pme gpu -npme 1 -nb gpu -gputasks 011 -deffnm md -nsteps 50000 -resetstep 25000

mpirun -np 2 gmx_mpi mdrun -ntomp 8 -pme cpu -nb gpu -gputasks 01 -deffnm md -nsteps 50000 -resetstep 25000

mpirun -np 2 gmx_mpi mdrun -ntomp 8 -pme gpu -npme 1 -nb gpu -gputasks 01 -deffnm md -nsteps 50000 -resetstep 25000
These short runs should take about 2 minutes each for a ~30K-atom system.
Reduce -nsteps if necessary, and compare the performance (ns/day) across
all of the runs; one way to pull the numbers out is sketched below.
On Wed, Apr 18, 2018 at 2:51 AM, <zaved at tezu.ernet.in> wrote:
> On Thu, Apr 12, 2018 at 4:47 PM <zaved at tezu.ernet.in> wrote:
> >
> >> Dear Gromacs Users
> >>
> >> We have a GPU Server (Intel(R) Xeon(R) CPU E5-2609 v4 @ 1.70GHz, 16
> >> cores)
> >> with 2 NVIDIA Tesla P100 (12GB) cards.
> >>
> >> What should my final mdrun command be so that it utilizes both of the
> >> cards for the run? (As of now it detects both cards, but auto-selects
> >> only 1.)
> >>
> >> As of now I am using the following command:
> >>
> >> gmx_mpi mdrun -v -deffnm run -gpu_id 0 1
> >>
> >
> > You mentioned auto-selection of GPU usage, but you are selecting the
> > GPU usage here, and you are requiring it to use only GPU 0. If you look
> > at the examples in the user guide, you will see how to use -gpu_id. If
> > you want to permit mdrun to use its auto-selection, then don't use
> > -gpu_id.
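(Side note on 2016 syntax: -gpu_id takes a single string with one digit
per PP rank, e.g. -gpu_id 01 to map two ranks onto GPUs 0 and 1; it is not
a space-separated list.)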
> >
> Thank you, Mark, for your kind response. However, if I use the command
> without the GPU ID (gmx_mpi mdrun -v -deffnm md), it still selects only 1
> card, and the log file reports the following:
>
> Using 1 MPI process
> Using 16 OpenMP threads
>
> 2 compatible GPUs are present, with IDs 0,1
> 1 GPU auto-selected for this run.
> Mapping of GPU ID to the 1 PP rank in this node: 0
>
>
> NOTE: potentially sub-optimal launch configuration, gmx mdrun started with
> less PP MPI process per node than GPUs available.
> Each PP MPI process can use only one GPU, 1 GPU per node will be used.
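(That note is the crux: each PP rank can drive only one GPU, so a
single-rank launch uses a single card. A minimal sketch for engaging both
GPUs with an MPI build of 2016, assuming your run files are named md:

mpirun -np 2 gmx_mpi mdrun -v -deffnm md -ntomp 8 -gpu_id 01

Here -gpu_id 01 puts rank 0 on GPU 0 and rank 1 on GPU 1, and 2 x 8 OpenMP
threads cover the 16 cores.)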
>
> Will do PME sum in reciprocal space for electrostatic interactions.
>
> Kindly help.
>
> Thank You
>
> Regards
> Zaved Hazarika
> PhD Scholar
> Dept.of Molecular Biology and Biotechnology
> Tezpur University
> India
>
>
>
> >
> >> I am using Gromacs 2016.4 version.
> >>
> >> For gromacs installation, used the following:
> >>
> >> CC=/usr/bin/mpicc F77=/usr/bin/f77 CXX=/usr/bin/mpicxx
> >> MPICC=/usr/bin/mpicc CMAKE_PREFIX_PATH=/soft/fftw337/lib cmake ..
> >> -DFFTWF_INCLUDE_DIR=/soft/fftw337/include
> >> -DFFTWF_LIBRARIES=/soft/fftw337/lib/libfftw3.so
> >> -DCMAKE_INSTALL_PREFIX=/soft/gmx164 -DGMX_X11=OFF
> >> -DCMAKE_CXX_COMPILER=/usr/bin/mpicxx -DCMAKE_C_COMPILER=/usr/bin/mpicc
> >> -DGMX_MPI=ON -DGMX_DOUBLE=OFF -DGMX_DEFAULT_SUFFIX=ON
> >> -DGMX_PREFER_STATIC_LIBS=ON -DGMX_SIMD=SSE2 -DGMX_SIMD=AVX2_256
> >> -DGMX_USE_RDTSCP=OFF -DGMX_GPU=ON
> >> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
> >>
> >
> > Use only one GMX_SIMD setting - the one that matches the capabilities of
> > the CPU.
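(For reference, the E5-2609 v4 is a Broadwell CPU, so AVX2_256 is the
setting that matches it: keep -DGMX_SIMD=AVX2_256 and drop
-DGMX_SIMD=SSE2. Broadwell also supports the RDTSCP instruction, so
-DGMX_USE_RDTSCP=OFF is unnecessary there.)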
> >
> >
> >> Do I need to provide any other option (OpenMPI) while installing
> >> GROMACS?
> >>
> >
> > You already chose to use your MPI library.
> >
> > Mark
>
>
>
--
Louis "Bobby" Hollingsworth
Ph.D. Student, Biological and Biomedical Sciences, Harvard University
B.S. Chemical Engineering, B.S. Biochemistry, B.A. Chemistry, Virginia Tech
Honors College '17
<http://www.linkedin.com/pub/louis-hollingsworth/77/aaa/a47>