[gmx-users] GPU problem

Mark Abraham mark.j.abraham at gmail.com
Wed Mar 12 16:52:25 CET 2014


Hi,

As the message says, you need at least one PP MPI process per GPU, so you
need to arrange your MPI launch to have more than one process if you want to
use both GPUs. Depending on your simulation and hardware, you may do better
with some even number of processes, a corresponding OpenMP setup, and a
mapping of processes to GPUs with mdrun -gpu_id 00...011...1. It is essential
to consult the performance table at the end of the log file to understand
what is going on under different conditions.
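
For example, on a 16-core node using both GPUs, a launch along the following
lines may be worth trying (a sketch only; the binary name mdrun_mpi and the
rank and thread counts are assumptions to be adapted to your own hardware):

mpirun -np 4 mdrun_mpi -ntomp 4 -gpu_id 0011 -v -deffnm md

Here -gpu_id 0011 maps the first two PP ranks to GPU 0 and the last two to
GPU 1, with four OpenMP threads per rank filling the 16 cores.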

And please don't make new installs of 4.6.1 when there have been four
subsequent bug-fix releases!

Mark


On Wed, Mar 12, 2014 at 2:49 PM, pratibha kapoor
<kapoorpratibha7 at gmail.com>wrote:

> Hi users,
>
> I would like to run my job on a GPU. I have compiled GROMACS version 4.6.1
> with CUDA 5 in parallel, changed the cutoff scheme to Verlet, generated a
> new *.tpr, and am finally running:
> mdrun -v -deffnm md
> I can see following lines in my output:
>
> changing nstlist from 5 to 40, rlist from 1 to 1.084
> using 1 MPI process
> using 16 OpenMP threads
> 2 GPUs detected on host
> 1 GPU auto-selected for this run
> Mapping of GPU to the 1 PP rank in this node: #0
>
> Note: potentially sub-optimal launch configuration, mdrun_mpi started with
> less PP MPI process per node than GPUs available. Each PP MPI process can
> use 1 GPU, 1 GPU per node will be used.
>
> This means the GPU is being utilised. But I can't see any performance
> enhancement (not even 1 ns) with this compared to the case without a GPU.
>
> Is there something wrong with the steps I followed? Any help is highly
> appreciated.

