[gmx-users] GPU job failed
Yunlong Liu
yliu120 at jhmi.edu
Mon Sep 8 23:59:50 CEST 2014
Same idea as Szilárd.
How many nodes are you using?
On one node, how many MPI ranks do you have? The error is complaining that you assigned two GPUs to a single MPI rank on one node. If you spread your two MPI ranks across two nodes, you only have one rank on each node, and then you cannot assign two GPUs to that single rank.
How many GPUs do you have on each node? If there are two, you can either launch two PP MPI ranks on that node and assign the two GPUs to them, or, if you only want one MPI rank per node, assign just one GPU on each node (e.g. with -gpu_id 0).
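A rough sketch, reusing your file names (the first line is essentially your original command and only works if both ranks end up on the same node; the option for forcing one rank per node, -npernode here, is an Open MPI flag and may be different for your MPI):

# two PP MPI ranks on the same node, GPUs #0 and #1 mapped to them
mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log -gpu_id 01 -ntomp 10

# one PP MPI rank per node, each rank using GPU #0 of its own node
mpirun -np 2 -npernode 1 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log -gpu_id 0 -ntomp 10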
Try one of those configurations and see how it goes.

Yunlong
Sent from my iPhone
> On Sep 8, 2014, at 5:35 PM, "Szilárd Páll" <pall.szilard at gmail.com> wrote:
>
> Hi,
>
> It looks like you're starting two ranks and passing two GPU IDs, so it
> should work. The only thing I can think of is that you are either
> getting the two MPI ranks placed on different nodes or that for some
> reason "mpirun -np 2" is only starting one rank (MPI installation
> broken?).
>
> Does the same setup work with thread-MPI?
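>
> For example, assuming a thread-MPI (non-MPI) build of mdrun is installed
> alongside mdrun_mpi, something like this should start two thread-MPI PP
> ranks and use both GPUs on a single node:
>
> # binary name assumed; two thread-MPI PP ranks, GPUs #0 and #1 mapped to them
> mdrun -ntmpi 2 -ntomp 10 -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log -gpu_id 01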
>
> Cheers,
> --
> Szilárd
>
>
>> On Mon, Sep 8, 2014 at 2:50 PM, Albert <mailmd2011 at gmail.com> wrote:
>> Hello:
>>
>> I am trying to use the following command in Gromacs-5.0.1:
>>
>> mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
>> -gpu_id 01 -ntomp 10
>>
>>
>> but it always failed with messages:
>>
>>
>> 2 GPUs detected on host cudaB:
>> #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
>> #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
>>
>> 2 GPUs user-selected for this run.
>> Mapping of GPUs to the 1 PP rank in this node: #0, #1
>>
>>
>> -------------------------------------------------------
>> Program mdrun_mpi, VERSION 5.0.1
>> Source code file: /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, line: 359
>>
>> Fatal error:
>> Incorrect launch configuration: mismatching number of PP MPI processes and GPUs per node.
>> mdrun_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>>
>>
>>
>> However, this command works fine in Gromacs-4.6.5, and I don't know why it
>> failed in 5.0.1. Does anybody have any idea?
>>
>> thx a lot
>>
>> Albert