[gmx-developers] Running Gromacs on GPUs on multiple machines
Mark Abraham
mark.j.abraham at gmail.com
Thu May 29 11:17:31 CEST 2014
On Thu, May 29, 2014 at 10:39 AM, Vedran Miletić <rivanvx at gmail.com> wrote:
> Hi,
>
> can one use both MPI and CUDA to run Gromacs on multiple machines with
> multiple GPUs?
Yes
> Wiki page on Acceleration and parallelization [1]
> doesn't handle this particular scenario, and comments on bug 1135 [2]
> suggest GPU acceleration is meant to be used on a single machine only. I
> might be missing something, because when I actually try to run it on a
> cluster with 5 machines, 1 GPU per machine, like this (assuming -gpu_id
> takes local GPU IDs):
>
> mpirun -np 5 -hostfile ... mdrun_mpi -v -deffnm ... -gpu_id 00000
>
That's not going to do what you probably think it does. See (from mdrun
-h): "The argument of -gpu_id is a string of digits (without delimiter)
representing device id-s of the GPUs to be used. For example, "02"
specifies using GPUs 0 and 2 in the first and second
PP ranks per compute node respectively." The "per compute node" is critical
here. Your -gpu_id requires that there be five PP ranks *per node* to
address. Why your MPI hostfile made that possible is a separate question
for you to consider. It sounds like you want to use
mpirun -np 5 -hostfile ... mdrun_mpi -v -deffnm ... -gpu_id 0
We do it this way because it is a much more scalable way of expressing the
underlying requirement that a PP rank on a node maps to a GPU on the same
node, when there's more than one GPU per node.
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Using_multi-simulations_and_GPUs
hints at this, but that page doesn't cover your case. I'll add it.
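For concreteness, here is a sketch of the multi-GPU-per-node case (the node
and rank counts below are hypothetical, not taken from your setup): with 2
GPUs per node and 4 PP ranks per node, placed e.g. with Open MPI's
-npernode, something like

mpirun -np 8 -npernode 4 -hostfile ... mdrun_mpi -v -deffnm ... -gpu_id 0011

maps the first two PP ranks on each node to GPU 0 and the last two to GPU 1;
the same four-character string is interpreted independently on every node.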
> it seems to run, and nvidia-smi shows a gmx_mpi process running.
> Very nice! However, the output of mdrun_mpi doesn't seem to be
> correct:
>
> 1 GPU detected on host akston:
> #0: NVIDIA GeForce GTX 660, compute cap.: 3.0, ECC: no, stat: compatible
>
> 1 GPU user-selected for this run.
> Mapping of GPU ID to the 5 PP ranks in this node: 0,0,0,0,0
> NOTE: You assigned GPUs to multiple MPI processes.
>
> I certainly did not assign a GPU to multiple MPI processes; there are
> 5 GPUs in total, yet no info about the other ones is printed on screen.
Actually, you did make such an assignment, as covered above. We only report
the detection from the lowest PP rank on the lowest-ranked node, because we
haven't bothered to serialize the data needed to check that everything is
sane. Usually such a machine is sufficiently homogeneous that this is not an
issue.
> So, is mdrun output wrong here and the code just works? Or is it that
> mdrun should warn even harder that this scenario is unsupported?
>
It works fine, but the output could be more specific. Documenting all the
cases sanely, in a place where someone will actually find it, is hard.
Mark
> Thanks in advance.
>
> Regards,
> Vedran
>
> [1] http://www.gromacs.org/Documentation/Acceleration_and_parallelization
> [2] http://bugzilla.gromacs.org/issues/1135