[gmx-users] Gromacs 2018.5 with CUDA

pbuscemi at q.com pbuscemi at q.com
Thu Jan 31 15:17:45 CET 2019



-----Original Message-----
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> On Behalf Of Szilárd Páll
Sent: Thursday, January 31, 2019 7:06 AM
To: Discussion list for GROMACS users <gmx-users at gromacs.org>
Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA

On Wed, Jan 30, 2019 at 5:14 PM <pbuscemi at q.com> wrote:
>
> Vlad,
>
> 390 is an 'old' driver now. Try something simple like installing CUDA 410.x and see if that resolves the issue. If you need to update the compiler, g++-7 may not work, but g++-6 does.

It is worth checking compatibility first. The GROMACS log file notes the CUDA driver compatibility version, and that has to be greater than or equal to the CUDA toolkit version.
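For a quick check, something like the following works (a rough sketch; it assumes nvidia-smi and the toolkit's nvcc are on your PATH, and that md.log is the log file of the run in question):

  nvidia-smi --query-gpu=driver_version --format=csv,noheader   # installed driver, e.g. 390.77
  nvcc --version | grep release                                 # toolkit release, e.g. 9.2
  grep -i "CUDA driver\|CUDA runtime" md.log                    # what the GROMACS log itself reports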

> Do NOT install the video driver from the CUDA toolkit, however. If necessary, do that separately from the PPA repository.

Why not? I'd prefer that we avoid strong advice on the list without an explanation and without ensuring that it is the best advice for all use cases.

====
 Not certain who responded, but your comments are well taken and I apologize for the dearth of information. If you use the driver installation from the CUDA toolkit, it will remove your current - and probably newer - driver and force you to go through a rather arduous process of blacklisting the Nouveau driver, temporarily terminating X Windows, etc., to install the driver; see for example https://gist.github.com/wangruohui/df039f0dc434d6486f5d4d098aa52d07. It is far easier to use the PPA repository.
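 Roughly, the PPA route looks like this (just a sketch for Ubuntu; the exact package name and driver version depend on your release and on what the PPA currently ships):

   sudo add-apt-repository ppa:graphics-drivers/ppa
   sudo apt update
   sudo apt install nvidia-driver-410   # pick the driver series you need
   sudo reboot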
====

The driver that comes with a CUDA toolkit may often be a bit old, but there is little reason not to use it, and you can always download a slightly newer version from the same series (e.g. the CUDA 9.2 toolkit came with 396.26, but the latest version available from that series is 396.54) from the official website:
https://www.nvidia.com/Download/index.aspx

When you search on the above site it generally spits out the latest version compatible with the hardware and OS selected, but if you want to stick to the same series, you can always get a full list of supported drivers under the "Beta and Older Drivers" link.

My experience with many systems and lots of CUDA installs and versions is this:
As long as you use one and only one source for your drivers, no matter which one you pick, in the majority of cases it just works (as long as you use a compatible CUDA toolkit).
If you install from a repository, keep using that and do _not_ try to install from another source (be it another repo or the binary blobs) without fully uninstalling first. The same goes for the binary blob drivers: upgrading from one version to another using the NVIDIA binary installer is generally fine; however, especially if you are downgrading or want to switch to installing from a repository, always run the "nvidia-uninstall" script first.
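As a concrete illustration of that last point (only a sketch, and it assumes the current driver really was installed with NVIDIA's .run installer, which is what provides the nvidia-uninstall script):

  sudo nvidia-uninstall   # cleanly removes a driver installed via the .run file
  # ...then install from the single source you intend to keep using (distro/PPA repo or a newer .run file)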


> Paul
>
> -----Original Message-----
> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se 
> <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> On Behalf Of 
> Benson Muite
> Sent: Wednesday, January 30, 2019 10:05 AM
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA
>
> Hi,
>
> Do you get the same build errors with Gromacs 2019?
>
> What operating system are you using?
>
> What GPU do you have?
>
> Do you have a newer version of GCC?
>
> Benson
>
> On 1/30/19 5:56 PM, Владимир Богданов wrote:
> Hi,
>
> Yes, I think so, because it seems to be working with NAMD-CUDA right now:
>
> Wed Jan 30 10:39:34 2019
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  TITAN Xp            Off  | 00000000:65:00.0  On |                  N/A |
> | 53%   83C    P2   175W / 250W |   2411MiB / 12194MiB |     47%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID   Type   Process name                             Usage      |
> |=============================================================================|
> |    0      1258      G   /usr/lib/xorg/Xorg                            40MiB |
> |    0      1378      G   /usr/bin/gnome-shell                          15MiB |
> |    0      7315      G   /usr/lib/xorg/Xorg                           403MiB |
> |    0      7416      G   /usr/bin/gnome-shell                         284MiB |
> |    0     12510      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     12651      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     12696      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     12737      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     12810      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     12868      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |    0     20688      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   251MiB |
> +-----------------------------------------------------------------------------+
>
> After the unsuccessful GROMACS run, I ran NAMD.
>
> Best,
>
> Vlad
>
>
> 30.01.2019, 10:59, "Mark Abraham" <mark.j.abraham at gmail.com>:
>
> Hi,
>
> Does nvidia-smi report that your GPUs are available to use?
>
> Mark
>
> On Wed, 30 Jan 2019 at 07:37 Владимир Богданов 
> <bogdanov-vladimir at yandex.ru>
> wrote:
>
>
>  Hey everyone!
>
>  I need help, please. When I try to run MD with the GPU, I get the following error:
>
>  Command line:
>
>  gmx_mpi mdrun -deffnm md -nb auto
>
>
>
>  Back Off! I just backed up md.log to ./#md.log.4#
>
>  NOTE: Detection of GPUs failed. The API reported:
>
>  GROMACS cannot run tasks on a GPU.
>
>  Reading file md.tpr, VERSION 2018.2 (single precision)
>
>  Changing nstlist from 20 to 80, rlist from 1.224 to 1.32
>
>
>
>  Using 1 MPI process
>
>  Using 16 OpenMP threads
>
>
>
>  Back Off! I just backed up md.xtc to ./#md.xtc.2#
>
>
>
>  Back Off! I just backed up md.trr to ./#md.trr.2#
>
>
>
>  Back Off! I just backed up md.edr to ./#md.edr.2#
>
>  starting mdrun 'Protein in water'
>
>  30000000 steps, 60000.0 ps.
>
>  I built GROMACS with MPI=on and CUDA=on and the compilation process looked good. I ran GROMACS 2018.2 with CUDA 5 months ago and it worked, but now it doesn't.
>
>  Information from *.log file:
>
>  GROMACS version: 2018.2
>
>  Precision: single
>
>  Memory model: 64 bit
>
>  MPI library: MPI
>
>  OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
>
>  GPU support: CUDA
>
>  SIMD instructions: AVX_512
>
>  FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
>
>  RDTSCP usage: enabled
>
>  TNG support: enabled
>
>  Hwloc support: disabled
>
>  Tracing support: disabled
>
>  Built on: 2018-06-24 02:55:16
>
>  Built by: vlad at vlad [CMAKE]
>
>  Build OS/arch: Linux 4.13.0-45-generic x86_64
>
>  Build CPU vendor: Intel
>
>  Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz
>
>  Build CPU family: 6 Model: 85 Stepping: 4
>
>  Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw 
> avx512vl  clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr 
> nonstop_tsc pcid  pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm 
> sse2 sse3 sse4.1 sse4.2
>  ssse3 tdt x2apic
>
>  C compiler: /usr/bin/cc GNU 5.4.0
>
>  C compiler flags: -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops  
> -fexcess-precision=fast
>
>  C++ compiler: /usr/bin/c++ GNU 5.4.0
>
>  C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG  
> -funroll-all-loops -fexcess-precision=fast
>
>  CUDA compiler: /usr/local/cuda-9.2/bin/nvcc nvcc: NVIDIA (R) Cuda 
> compiler  driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on  
> Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, 
> V9.2.88
>
>  CUDA compiler
>  
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,cod
> e=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,c
> ode=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60
> ,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_
> 70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;
> -D_FORCE_INLINES;;  
> ;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-p
> recision=fast;
>
>  CUDA driver: 9.10
>
>  CUDA runtime: 32.64
>
>
>
>  NOTE: Detection of GPUs failed. The API reported:
>
>  GROMACS cannot run tasks on a GPU.
>
>
>  Any idea what I am doing wrong?
>
>
>  Best,
>  Vlad
>
>  --
>  Best regards, Владимир А. Богданов
>
>
>
>
> --
> Best regards, Владимир А. Богданов
>
>
--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.


