[gmx-users] nvidia tesla p100

Mark Abraham mark.j.abraham at gmail.com
Mon Oct 31 21:41:57 CET 2016


Hi,

On Sun, Oct 30, 2016 at 9:55 PM Irem Altan <irem.altan at duke.edu> wrote:

> Hi,
>
> Thank you. It turns out that I hadn’t requested the correct number of GPUs
> in the submission script, so it now sees the GPUs. There are more problems,
> however. I’m using 5.1.2, because 2016 doesn’t seem to have been properly
> set up on the cluster that I’m using (Bridges-Pittsburgh). I’m having
> trouble figuring out the optimum number of threads and such for the nodes
> in this cluster. The nodes have 2 nVidia Tesla P100 GPUs, and 2 Intel Xeon
> CPUs with 16 cores each.


So that would be 32 total cores, which with hyperthreading might be 64
threads?
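
If you want to double-check what a node actually exposes, something along
these lines, run inside the job allocation (exact output will vary), shows the
logical core count and the GPUs and driver the job can see:

  lscpu | grep -E 'CPU\(s\)|Thread|Core|Socket'
  nvidia-smi --query-gpu=name,driver_version --format=csv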


> Therefore I request 2 tasks per node, and use the following command to run
> mdrun:
>
> mpirun -np $SLURM_NPROCS gmx_mpi mdrun -ntomp 2 -v -deffnm npt
>
> where $SLURM_NPROCS gets set to 32 automatically


That'll get you 2 OpenMP threads per MPI rank, so 2*32 = 64 threads in total,
and you'd probably prefer that each rank's two threads end up on the same
physical core.
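
If that layout is what you're after, it's worth making the pinning explicit
rather than relying on the defaults. A minimal variant of your command (mdrun
has -pin in 5.1; and with more PP ranks per node than GPUs you will, as far as
I recall, also need an explicit -gpu_id mapping, sketched here as 16 ranks per
GPU) would be:

  mpirun -np 32 gmx_mpi mdrun -ntomp 2 -pin on -gpu_id 00000000000000001111111111111111 -v -deffnm npt

Whether 16 ranks sharing one GPU is actually fast is another matter; fewer
ranks with more threads each (see below) may well do better, so benchmark both.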


> (this is what fails with version 2016, apparently).
>
> This results in the following messages in the output:
>
> Number of logical cores detected (32) does not match the number reported
> by OpenMP (1).
> Consider setting the launch configuration manually!
>

Ignore this; it reports something meaningful, but not the thing it literally
says. The message has been removed in 2016 until someone works out a good way
to say something useful. It probably means you're getting the layout I
suggested you should prefer.


> Running on 1 node with total 32 logical cores, 2 compatible GPUs
> Hardware detected on host gpu047.pvt.bridges.psc.edu (the node of MPI
> rank 0):
>   CPU info:
>     Vendor: GenuineIntel
>     Brand:  Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
>     SIMD instructions most likely to fit this hardware: AVX2_256
>     SIMD instructions selected at GROMACS compile time: AVX2_256
>   GPU info:
>     Number of GPUs detected: 2
>     #0: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
>     #1: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
>
> Reading file npt.tpr, VERSION 5.1.2 (single precision)
> Changing nstlist from 20 to 40, rlist from 1.017 to 1.073
>
> Using 2 MPI processes
> Using 2 OpenMP threads per MPI process
>

This can't have come from mpirun -np $SLURM_NPROCS gmx_mpi mdrun with
$SLURM_NPROCS set to 32 (unless you're actually running a multi-simulation
that we don't know about).
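
For reference, a launch that would actually match that log, and is a sensible
starting point for 2 GPUs and 32 cores (one PP rank per GPU; treat the numbers
as a sketch to benchmark, not a recommendation), is

  mpirun -np 2 gmx_mpi mdrun -ntomp 16 -pin on -v -deffnm npt

or, if 16 OpenMP threads per rank scales poorly, a few ranks per GPU with an
explicit mapping, e.g.

  mpirun -np 8 gmx_mpi mdrun -ntomp 4 -pin on -gpu_id 00001111 -v -deffnm npt

Either way, ask SLURM for matching resources (e.g. --ntasks and
--cpus-per-task, or whatever the Bridges template uses), so that
$SLURM_NPROCS and what mdrun sees actually line up.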

> On host gpu047.pvt.bridges.psc.edu 2 compatible GPUs are present, with IDs
> 0,1
> On host gpu047.pvt.bridges.psc.edu 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 PP ranks in this node: 0,1
>
> I’m concerned with the first message. Does this mean that I cannot fully
> utilize the 32 cores? The resulting simulation speed is comparable to my
> previous system with a single K80 GPU and 6 cores. Am I doing something
> wrong, or have the system administrators compiled/set up Gromacs
> incorrectly?
>

Your report seems inconsistent, so we can't say yet.

Mark


> Best,
> Irem
>
> > On Oct 29, 2016, at 7:20 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
> >
> > Hi,
> >
> > Sure, any CUDA build of GROMACS will run on such a card, but you want
> > 2016.1 for best performance. Your problem is likely that you haven't got a
> > suitably new driver installed. What does nvidia-smi report?
> >
> > Mark
> >
> > On Sun, Oct 30, 2016 at 1:13 AM Irem Altan <irem.altan at duke.edu> wrote:
> >
> >> Hi,
> >>
> >> I was wondering, does Gromacs support nVidia Tesla P100 cards? I’m trying
> >> to run a simulation on a node with this GPU, but whatever I tried, I can’t
> >> get Gromacs to detect a cuda-capable card:
> >>
> >> NOTE: Error occurred during GPU detection:
> >>      no CUDA-capable device is detected
> >>      Can not use GPU acceleration, will fall back to CPU kernels.
> >>
> >> Is it even supported?
> >>
> >> Best,
> >> Irem
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.

