[gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

João Henriques joao.henriques.32353 at gmail.com
Wed Jun 5 13:00:06 CEST 2013


Thank you very much for both contributions. I will conduct some tests to
assess which approach works best for my system.

Much appreciated,
Best regards,
João Henriques


On Tue, Jun 4, 2013 at 6:30 PM, Szilárd Páll <szilard.pall at cbr.su.se> wrote:

> mdrun is not blind; it's just that the current design does not report the
> hardware of all compute nodes used. Whatever CPU/GPU hardware mdrun reports
> in the log/std output is *only* what rank 0, i.e. the first MPI process,
> detects. If you have a heterogeneous hardware configuration, in most cases
> you should still be able to run just fine, but only the hardware the first
> rank sits on will be reported.
>
> Hence, if you want to run on 5 of the nodes you mention, you just do:
> mpirun -np 10 mdrun_mpi [-gpu_id 01]
>
> You may want to try both -ntomp 8 and -ntomp 16 (using HyperThreading
> does not always help).
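>
> For example, a minimal sketch of such a 5-node launch (note that
> -npernode is OpenMPI-specific; adjust the rank placement options to
> your MPI library and batch system):
>
>   # 5 nodes, 2 ranks per node, one GPU per rank, 8 OpenMP threads per rank
>   mpirun -np 10 -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01
>
>   # same layout, but 16 threads per rank to also use HyperThreading
>   mpirun -np 10 -npernode 2 mdrun_mpi -ntomp 16 -gpu_id 01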
>
> Also note that if you share GPUs among ranks (in order to use <8
> threads/rank), disabling dynamic load balancing may help for technical
> reasons - especially if you have a homogeneous simulation system (and
> hardware setup).
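>
> For instance, a sketch of such a setup on 5 of your nodes (4 ranks per
> node, each K20 shared by two ranks via -gpu_id 0011, 4 OpenMP threads
> per rank, and -dlb no to disable dynamic load balancing; again,
> -npernode is OpenMPI-specific):
>
>   mpirun -np 20 -npernode 4 mdrun_mpi -ntomp 4 -gpu_id 0011 -dlb no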
>
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Jun 4, 2013 at 3:31 PM, João Henriques
> <joao.henriques.32353 at gmail.com> wrote:
> > Dear all,
> >
> > Since gmx-4.6 came out, I've been particularly interested in taking
> > advantage of the native GPU acceleration for my simulations. Luckily, I
> > have access to a cluster with the following specs PER NODE:
> >
> > CPU
> > 2x Intel Xeon E5-2650 (2.0 GHz, 8-core)
> >
> > GPU
> > 2x NVIDIA Tesla K20
> >
> > I've become quite familiar with the "heterogeneous parallelization" and
> > "multiple MPI ranks per GPU" schemes on a SINGLE NODE. Everything works
> > fine, no problems at all.
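> >
> > (For reference, a single-node run of that kind would look something
> > like the following sketch, assuming the thread-MPI mdrun build:
> >
> >   mdrun -ntmpi 2 -ntomp 8 -gpu_id 01
> >
> > i.e. two thread-MPI ranks, one per GPU, with 8 OpenMP threads each.)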
> >
> > Currently, I'm working with a nasty system comprising 608159 TIP3P water
> > molecules, and it would really help to speed things up a bit. Therefore,
> > I would really like to parallelize my system over multiple nodes while
> > keeping the GPU acceleration.
> >
> > I've tried many different command combinations, but mdrun seems to be
> > blind to the GPUs on the other nodes. It always finds GPUs #0 and #1 on
> > the first node and tries to fit everything onto them, completely
> > disregarding the GPUs on the remaining requested nodes.
> >
> > Once again, note that all nodes have exactly the same specs.
> >
> > The documentation on the official gmx website is not, well... you know...
> > particularly in-depth, and I would really appreciate it if someone could
> > shed some light on this subject.
> >
> > Thank you,
> > Best regards,
> >
> > --
> > João Henriques



-- 
João Henriques


