[gmx-users] nvidia tesla p100

Irem Altan irem.altan at duke.edu
Mon Oct 31 17:59:20 CET 2016


Hi,

I should add that the problem described below still persists, even after I set OMP_NUM_THREADS=16.
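
For reference, an explicit launch configuration for one of these nodes (one MPI rank per GPU, 16 OpenMP threads per rank) would look roughly like the sketch below; the flags are standard mdrun options in 5.1, but please treat this as a sketch to adapt rather than a tested script:

export OMP_NUM_THREADS=16
mpirun -np 2 gmx_mpi mdrun -ntomp 16 -gpu_id 01 -pin on -v -deffnm npt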

Best,
Irem

> On Oct 30, 2016, at 4:54 PM, Irem Altan <irem.altan at duke.edu> wrote:
> 
> Hi,
> 
> Thank you. It turns out that I hadn’t requested the correct number of GPUs in the submission script; now that I have, mdrun sees the GPUs. There are more problems, however. I’m using 5.1.2, because 2016 doesn’t seem to have been set up properly on the cluster I’m using (Bridges-Pittsburgh). I’m having trouble figuring out the optimal number of MPI ranks and OpenMP threads for the nodes in this cluster. The nodes have 2 NVIDIA Tesla P100 GPUs and 2 Intel Xeon CPUs with 16 cores each. I therefore request 2 tasks per node and use the following command to run mdrun:
> 
> mpirun -np $SLURM_NPROCS gmx_mpi mdrun -ntomp 2 -v -deffnm npt
> 
> where $SLURM_NPROCS gets set to 32 automatically (this is what fails with version 2016, apparently).
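> 
> For context, a submission-script header requesting a full node of this type would look roughly like the sketch below; the exact #SBATCH directives, and in particular the --gres syntax, are illustrative assumptions rather than a copy of the actual script:
> 
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=2
> #SBATCH --cpus-per-task=16
> #SBATCH --gres=gpu:2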
> 
> This results in the following messages in the output:
> 
> Number of logical cores detected (32) does not match the number reported by OpenMP (1).
> Consider setting the launch configuration manually!
> 
> Running on 1 node with total 32 logical cores, 2 compatible GPUs
> Hardware detected on host gpu047.pvt.bridges.psc.edu (the node of MPI rank 0):
>  CPU info:
>    Vendor: GenuineIntel
>    Brand:  Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
>    SIMD instructions most likely to fit this hardware: AVX2_256
>    SIMD instructions selected at GROMACS compile time: AVX2_256
>  GPU info:
>    Number of GPUs detected: 2
>    #0: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
>    #1: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
> 
> Reading file npt.tpr, VERSION 5.1.2 (single precision)
> Changing nstlist from 20 to 40, rlist from 1.017 to 1.073
> 
> Using 2 MPI processes
> Using 2 OpenMP threads per MPI process
> 
> On host gpu047.pvt.bridges.psc.edu 2 compatible GPUs are present, with IDs 0,1
> On host gpu047.pvt.bridges.psc.edu 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 PP ranks in this node: 0,1
> 
> I’m concerned about the first message. Does it mean that I cannot fully utilize the 32 cores? The resulting simulation speed is comparable to what I got on my previous system with a single K80 GPU and 6 cores. Am I doing something wrong, or have the system administrators compiled or set up GROMACS incorrectly?
> 
> Best,
> Irem
> 
>> On Oct 29, 2016, at 7:20 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
>> 
>> Hi,
>> 
>> Sure, any CUDA build of GROMACS will run on such a card, but you want
>> 2016.1 for best performance. Your problem is likely that you haven't got a
>> suitably new driver installed. What does nvidia-smi report?
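>> 
>> For example, plain nvidia-smi prints the driver version in its banner, and a query along these lines (supported by reasonably recent drivers) lists just the card names and driver version:
>> 
>> nvidia-smi
>> nvidia-smi --query-gpu=name,driver_version --format=csv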
>> 
>> Mark
>> 
>> On Sun, Oct 30, 2016 at 1:13 AM Irem Altan <irem.altan at duke.edu> wrote:
>> 
>>> Hi,
>>> 
>>> I was wondering, does GROMACS support NVIDIA Tesla P100 cards? I’m trying
>>> to run a simulation on a node with this GPU, but no matter what I try, I
>>> can’t get GROMACS to detect a CUDA-capable card:
>>> 
>>> NOTE: Error occurred during GPU detection:
>>>     no CUDA-capable device is detected
>>>     Can not use GPU acceleration, will fall back to CPU kernels.
>>> 
>>> Is it even supported?
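>>> 
>>> A minimal check of what the job itself can see, assuming the usual CUDA environment on the compute node, would be something like:
>>> 
>>> echo $CUDA_VISIBLE_DEVICES
>>> nvidia-smi -L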
>>> 
>>> Best,
>>> Irem
> 