[gmx-users] GPU+CPU
Szilárd Páll
pall.szilard at gmail.com
Thu Oct 1 17:49:57 CEST 2015
On Sun, Sep 20, 2015 at 7:11 PM, Parker de Waal <Parker.deWaal at vai.org>
wrote:
> Hello Everyone,
>
> I recently started to explore GROMACS (switching over from AMBER) and need
> some help understanding how to launch GPU+CPU simulations.
>
> GROMACS 5.0.6 was compiled with the following cmake arguments:
> cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/cm/shared/apps/cuda70/toolkit/7.0.28 -DGMX_MPI=ON
> -DCMAKE_INSTALL_PREFIX=/home/parker.dewaal/applications/gromacs-5.0.6
> -DGMX_GPU=ON
>
> Now I would like to run a simulation on a box with 28 CPU cores and 2
> titan K80s (4 GPU threads), what would be the difference between running
> the following:
>
I assume you mean Tesla K80s.
>
> mdrun
>
> mdrun -ntmpi 4 -ntomp 7
>
> mpirun -np 4 mdrun_mpi
>
The former two use the built-in thread-MPI library, which implements ranks as
threads within a single process; the latter uses an external MPI library that
implements ranks as separate processes. This leads to technical differences
between the two runs, and the overall outcome is not trivial to predict.
Typically the thread-MPI based run will be slightly more efficient, but with
modern MPI implementations the performance should be close.
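To make the distinction concrete, the two launch styles on your box could look
like the following (a sketch, assuming a GROMACS 5.0.x install with both the
thread-MPI binary `mdrun` and the MPI-enabled binary `mdrun_mpi` on the PATH;
the explicit `-gpu_id` mapping is illustrative, mdrun will pick a similar
default on its own):

```shell
# Thread-MPI build: a single process with 4 thread-MPI ranks, each running
# 7 OpenMP threads and driving one GPU. Note that a K80 card exposes two
# GPU devices, so 2 cards show up as 4 GPUs (ids 0-3).
mdrun -ntmpi 4 -ntomp 7 -gpu_id 0123

# External MPI build: 4 separate processes launched by mpirun, each rank
# again using 7 OpenMP threads and one of the 4 GPUs.
mpirun -np 4 mdrun_mpi -ntomp 7 -gpu_id 0123
```

Either way the decomposition is the same (4 ranks x 7 threads = 28 cores,
one rank per GPU); only the rank implementation differs.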
Cheers,
--
Szilárd
>
> Cheers,
> Parker