[gmx-users] GPU Command

zaved at tezu.ernet.in
Wed Apr 18 09:47:23 CEST 2018


 On Thu, Apr 12, 2018 at 4:47 PM <zaved at tezu.ernet.in> wrote:
>
>> Dear Gromacs Users
>>
>> We have a GPU Server (Intel(R) Xeon(R) CPU E5-2609 v4 @ 1.70GHz, 16
>> cores)
>> with 2 NVIDIA Tesla P100 (12GB) cards.
>>
>> What should my final mdrun command be so that it utilizes both cards for
>> the run? (As of now it detects both cards, but auto-selects only one.)
>>
>> As of now I am using the following command:
>>
>> gmx_mpi mdrun -v -deffnm run -gpu_id 0 1
>>
>
> You mentioned auto-selection of GPU usage, but you are selecting the GPU
> usage here, and you are requiring it to use only GPU 0. If you look at the
> examples in the user guide, you will see how to use -gpu_id. If you want to
> permit mdrun to use its auto-selection, then don't use -gpu_id.
>
Thank you, Mark, for your kind response. However, if I use the command
without the GPU ID (gmx_mpi mdrun -v -deffnm md), it still selects only one
card, and the log file reports the following message:

Using 1 MPI process
Using 16 OpenMP threads

2 compatible GPUs are present, with IDs 0,1
1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0


NOTE: potentially sub-optimal launch configuration, gmx mdrun started with
less PP MPI process per node than GPUs available.
Each PP MPI process can use only one GPU, 1 GPU per node will be used.

Will do PME sum in reciprocal space for electrostatic interactions.
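The NOTE above points to the likely cause: the run was started with a single
PP MPI rank, and each PP rank can use only one GPU. A minimal sketch of a
launch with one rank per GPU, assuming OpenMPI's mpirun and an even split of
the 16 cores (the -np 2, -ntomp 8 and -gpu_id 01 values are illustrative, not
taken from this thread):

mpirun -np 2 gmx_mpi mdrun -v -deffnm run -ntomp 8 -gpu_id 01

In GROMACS 2016 the -gpu_id string assigns one GPU ID to each PP rank on the
node; with two ranks started, mdrun's auto-selection should also pick up both
GPUs even without -gpu_id.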

Kindly help.

Thank You

Regards
Zaved Hazarika
PhD Scholar
Dept. of Molecular Biology and Biotechnology
Tezpur University
India



>
>> I am using GROMACS 2016.4.
>>
>> For the GROMACS installation, I used the following:
>>
>> CC=/usr/bin/mpicc F77=/usr/bin/f77 CXX=/usr/bin/mpicxx
>> MPICC=/usr/bin/mpicc CMAKE_PREFIX_PATH=/soft/fftw337/lib cmake ..
>> -DFFTWF_INCLUDE_DIR=/soft/fftw337/include
>> -DFFTWF_LIBRARIES=/soft/fftw337/lib/libfftw3.so
>> -DCMAKE_INSTALL_PREFIX=/soft/gmx164 -DGMX_X11=OFF
>> -DCMAKE_CXX_COMPILER=/usr/bin/mpicxx -DCMAKE_C_COMPILER=/usr/bin/mpicc
>> -DGMX_MPI=ON -DGMX_DOUBLE=OFF -DGMX_DEFAULT_SUFFIX=ON
>> -DGMX_PREFER_STATIC_LIBS=ON -DGMX_SIMD=SSE2 -DGMX_SIMD=AVX2_256
>> -DGMX_USE_RDTSCP=OFF -DGMX_GPU=ON
>> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
>>
>
> Use only one GMX_SIMD setting - the one that matches the capabilities of
> the CPU.
>
>
>> Do I need to provide any other option (OpenMPI) when installing
>> GROMACS?
>>
>
> You already chose to use your MPI library.
>
> Mark
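For reference, following the GMX_SIMD point above, a sketch of the same
configure command with a single SIMD setting kept (AVX2_256, on the grounds
that the Xeon E5-2609 v4 is a Broadwell part with AVX2 support; all paths and
other flags are simply copied from the command quoted above):

# same configure line as above, with the duplicate -DGMX_SIMD=SSE2 dropped
CC=/usr/bin/mpicc F77=/usr/bin/f77 CXX=/usr/bin/mpicxx \
MPICC=/usr/bin/mpicc CMAKE_PREFIX_PATH=/soft/fftw337/lib cmake .. \
    -DFFTWF_INCLUDE_DIR=/soft/fftw337/include \
    -DFFTWF_LIBRARIES=/soft/fftw337/lib/libfftw3.so \
    -DCMAKE_INSTALL_PREFIX=/soft/gmx164 -DGMX_X11=OFF \
    -DCMAKE_CXX_COMPILER=/usr/bin/mpicxx -DCMAKE_C_COMPILER=/usr/bin/mpicc \
    -DGMX_MPI=ON -DGMX_DOUBLE=OFF -DGMX_DEFAULT_SUFFIX=ON \
    -DGMX_PREFER_STATIC_LIBS=ON -DGMX_SIMD=AVX2_256 \
    -DGMX_USE_RDTSCP=OFF -DGMX_GPU=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda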




