[gmx-users] Using Gpus on multiple nodes. (Feature #1591)

Siva Dasetty sdasett at g.clemson.edu
Tue Oct 14 23:42:54 CEST 2014


Thank you, Mark, for the reply.

We use PBS to submit jobs on our cluster, and this is how I request the
nodes and processors:

#PBS -l
select=2:ncpus=8:mem=8gb:mpiprocs=8:ngpus=2:gpu_model=k20:interconnect=fdr
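
(To check where the ranks actually land with this request, I suppose I
could run a quick placement test; a sketch, assuming our mpirun is Open
MPI's launcher and that it picks up the PBS node file automatically:

mpirun -np 4 hostname

which should print each of the two node names twice if the mapping is
what I expect.)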


Do you think the problem could be with the way I built mdrun with Open
MPI support?
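
(If it helps to rule that out, I understand that mdrun's version output
lists the MPI library it was built against, at least in this GROMACS
era, so I can run

mdrun -version

and paste the build information here if that is useful.)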


Could you please suggest the environment settings I may be missing from
the job script so that MPI places 2 ranks on each node?
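
From the Open MPI documentation, I gather that mpirun has options to
control how many ranks go on each node; a sketch of what I might try,
assuming our mpirun is an Open MPI 1.8-era launcher:

mpirun -np 4 -npernode 2 mdrun -s <tpr file> -deffnm <...> -gpu_id 01

(or the newer --map-by ppr:2:node spelling), so that each node runs 2 PP
ranks mapped onto GPU ids 0 and 1. Does that look right?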


Thank you for your time.



On Tue, Oct 14, 2014 at 5:20 PM, Mark Abraham <mark.j.abraham at gmail.com>
wrote:

> On Tue, Oct 14, 2014 at 10:51 PM, Siva Dasetty <sdasett at g.clemson.edu>
> wrote:
>
> > Dear All,
> >
> > I am currently able to run a simulation on a single node containing 2
> > GPUs, but I get the following fatal error when I try to run the
> > simulation using multiple GPUs (2 on each node) across multiple nodes
> > (2, for example) using Open MPI.
> >
>
> Here you say you want 2 ranks on each of two nodes...
>
>
> > Fatal error:
> >
> > Incorrect launch configuration: mismatching number of PP MPI processes
> > and GPUs per node.
> >
> > mdrun was started with 4 PP MPI processes per node,
>
>
> ... but here mdrun means what it says...
>
>
> > but you provided only 2 GPUs.
> >
> > The command I used to run the simulation is
> >
> > mpirun -np 4 mdrun  -s <tpr file>  -deffnm <...>  -gpu_id 01
> >
>
> ... which means your MPI environment (hostfile, job script settings,
> whatever) doesn't have the settings you think it does, since it's putting
> all 4 ranks on one node.
>
> Mark
>
>
> >
> >
> > However, it at least runs if I use the following command:
> >
> >
> > mpirun -np 4 mdrun  -s <tpr file>  -deffnm <...>  -gpu_id 0011
> >
> >
> > But after reading the following thread, I highly doubt that I am using
> > all 4 GPUs available across the 2 nodes:
> >
> >
> >
> >
> > https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2014-May/007682.html
> >
> >
> >
> > Thank you for your help in advance,
> >
> > --
> > Siva



-- 
Siva

