[gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration
mark.j.abraham at gmail.com
Tue Jun 4 17:59:14 CEST 2013
Yes, the documentation and output are not optimal. Resources are limited, sorry.
Some of these issues are discussed in
http://bugzilla.gromacs.org/issues/1135. The good news is that it sounds
like you are having a non-problem. The detection output only reports the GPUs
on the first node and tacitly assumes the remaining nodes are homogeneous. If
your performance scales roughly linearly over small numbers of nodes, then
you're doing fine.
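For reference, here is a minimal multi-node launch sketch (assumptions: an
MPI-enabled build named mdrun_mpi, Open MPI's -npernode flag, and a 2-node job;
adapt to your MPI and batch system). mdrun reads the -gpu_id string per node,
so every node maps its own local ranks to its own local GPUs:

```shell
# 2 nodes x (2 K20 GPUs, 16 cores): 4 PP ranks per node, 4 OpenMP threads each.
# "-gpu_id 0011" is applied on each node separately:
#   local ranks 0,1 -> GPU 0; local ranks 2,3 -> GPU 1.
mpirun -np 8 -npernode 4 mdrun_mpi -ntomp 4 -gpu_id 0011 -deffnm topol
```

If you change the number of PP ranks per node, the length of the -gpu_id
string must change to match; the practical check that all GPUs are in use is
the scaling test above, since the console output reflects only the first node.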
On Tue, Jun 4, 2013 at 3:31 PM, João Henriques <
joao.henriques.32353 at gmail.com> wrote:
> Dear all,
> Since gmx-4.6 came out, I've been particularly interested in taking
> advantage of the native GPU acceleration for my simulations. Luckily, I
> have access to a cluster with the following specs PER NODE:
> 2 E5-2650 (2.0 GHz, 8-core)
> 2 Nvidia K20
> I've become quite familiar with the "heterogeneous parallelization" and
> "multiple MPI ranks per GPU" schemes on a SINGLE NODE. Everything works
> fine, no problems at all.
> Currently, I'm working with a nasty system comprising 608159 TIP3P water
> molecules, and it would really help to speed things up a bit.
> Therefore, I would really like to try to parallelize my system over
> multiple nodes and keep the GPU acceleration.
> I've tried many different command combinations, but mdrun seems to be blind
> towards the GPUs existing on other nodes. It always finds GPUs #0 and #1 on
> the first node and tries to fit everything into these, completely
> disregarding the existence of the other GPUs on the remaining requested
> nodes. Once again, note that all nodes have exactly the same specs.
> The literature on the official gmx website is not, well... you know...
> in-depth, and I would really appreciate it if someone could shed some light
> on this matter.
> Thank you,
> Best regards,
> João Henriques