[gmx-developers] Running Gromacs on GPUs on multiple machines
pall.szilard at gmail.com
Thu May 29 18:49:36 CEST 2014
On Thu, May 29, 2014 at 4:52 PM, Vedran Miletić <rivanvx at gmail.com> wrote:
> 2014-05-29 16:36 GMT+02:00 Anders Gabrielsson <andgab at kth.se>:
>> You'll probably have to supply -npernode/-ppn too so that your mpirun
>> doesn't put all MPI ranks on the same host.
>> On 29 May 2014, at 13:17, Vedran Miletić <rivanvx at gmail.com> wrote:
>> Fatal error:
>> Incorrect launch configuration: mismatching number of PP MPI processes
>> and GPUs per node.
>> mdrun_mpi was started with 5 PP MPI processes per node, but you provided 1
> Hi Anders,
> thanks for your response. Weirdly enough, mpirun actually doesn't run
> processes all on one node, it distributes them as equally as possible,
> going around your hostfile in a round-robin fashion. (I verified this
> by running hostname.)
> However, it seems that for some reason Gromacs assumes mpirun does run
> 5 processes on a single node. Regardless, I tried
That can only be the case if:
- your ranks are indeed on the same physical node, or
- your compute nodes have non-conventional hostnames which make the
rank splitting into groups fail, so mdrun erroneously thinks that
different compute nodes (with different hostnames) have the same
hostname.
What hostnames do your nodes have?
> mpirun -np 5 -npernode 1 -hostfile ... mdrun_mpi -v -deffnm ... -gpu_id 0
> but it produces the same error. Anything else I could try?
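One quick sanity check is to launch `hostname` under the same mpirun flags and count how many ranks land on each host; with -npernode 1 every host should appear exactly once. A minimal sketch of the counting step (the mpirun line, hostfile path, and node names are placeholders, not taken from the actual cluster):

```shell
# On the cluster, capture rank placement with the same launch flags, e.g.:
#   mpirun -np 5 -npernode 1 -hostfile hosts hostname > placement.txt
# Here we fake that output with sample node names to show the counting:
printf 'node1\nnode2\nnode3\nnode4\nnode5\n' > placement.txt

# Count ranks per host; with -npernode 1 every count should be 1.
sort placement.txt | uniq -c | awk '{print $2, $1}'
```

If any host shows a count above 1 while mdrun_mpi is given a single -gpu_id entry, that would reproduce the "mismatching number of PP MPI processes and GPUs per node" error.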