[gmx-users] Commands to run simulations using multiple GPUs in version 5.0.1

Siva Dasetty sdasett at g.clemson.edu
Wed Sep 24 05:07:52 CEST 2014


Thank you, Lu, for the reply.

As I mentioned in the post, I have already tried those options, but they
didn't work. Please let me know if you have any more suggestions.

Thank you,

On Tue, Sep 23, 2014 at 8:41 PM, Johnny Lu <johnny.lu128 at gmail.com> wrote:

> Try -nt, -ntmpi, -ntomp, -np (one at a time)?
> I forget what I tried now... but I just stopped mdrun and then read the
> log file.
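> As a rough sketch (the thread counts here are just placeholders; pick
> values that match your node, and note -ntmpi needs a thread-MPI build):
>
>   mdrun -s topol.tpr -nt 8              # let mdrun split 8 threads itself
>   mdrun -s topol.tpr -ntmpi 2 -ntomp 4  # 2 thread-MPI ranks x 4 OpenMP threads each
>
> Then compare the performance summary at the end of each md.log.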
> You can also look at the mdrun page in the official manual (PDF) and try
> this page:
>
> http://www.gromacs.org/Documentation/Gromacs_Utilities/mdrun?highlight=mdrun
>
>
> On Mon, Sep 22, 2014 at 6:46 PM, Siva Dasetty <sdasett at g.clemson.edu>
> wrote:
>
> > Dear  All,
> >
> > I am trying to run NPT simulations with GROMACS version 5.0.1 on a
> > system of ~140k atoms (protein + water) using 2 or more GPUs (Tesla
> > K20), 8 or more cores, and 1 or more nodes. I am trying to understand
> > how to run simulations using multiple GPUs on more than one node. I get
> > the following errors/output when I run the simulation with the commands
> > below:
> >
> > Note: time-step used = 2 fs and total number of steps = 20000
> >
> > The first four cases use a single GPU; cases 5-8 use two GPUs.
> >
> > 1. 1 node, 8 CPUs, 1 GPU
> > export OMP_NUM_THREADS=8
> > command used: mdrun -s topol.tpr -gpu_id 0
> > Speed: 5.8 ns/day
> >
> > 2. 1 node, 8 CPUs, 1 GPU
> > export OMP_NUM_THREADS=16
> > command used: mdrun -s topol.tpr -gpu_id 0
> > Speed: 4.7 ns/day
> >
> > 3. 1 node, 8 CPUs, 1 GPU
> > mdrun -s topol.tpr -ntomp 8 -gpu_id 0
> > Speed: 5.876 ns/day
> >
> > 4. 1 node, 8 CPUs, 1 GPU
> > mdrun -s topol.tpr -ntomp 16 -gpu_id 0
> > Fatal error: Environment variable OMP_NUM_THREADS (8) and the number of
> > threads requested on the command line (16) have different values.
> > Either omit one, or set them both to the same value.
> >
> > Question for 3 and 4: Do I always need to set OMP_NUM_THREADS, or is
> > there a way for -ntomp to override the environment setting?
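> > (In other words, is the supported fix simply to clear the variable and
> > rely on the flag? A minimal sketch, assuming a bash shell:
> >
> >   unset OMP_NUM_THREADS
> >   mdrun -s topol.tpr -ntomp 16 -gpu_id 0
> > )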
> >
> >
> > 5. 1 node, 8 CPUs, 2 GPUs
> > export OMP_NUM_THREADS=8
> > mpirun -np 2 mdrun -s topol.tpr -pin on -gpu_id 01
> > Speed: 4.044 ns/day
> >
> > 6. 2 nodes, 8 CPUs, 2 GPUs
> > export OMP_NUM_THREADS=8
> > mpirun -np 2 mdrun -s topol.tpr -pin on -gpu_id 01
> > Speed: 3.0 ns/day
> >
> > Are the commands I used for cases 5 and 6 correct?
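> > (For case 6 in particular: if each node has a single K20, my guess is
> > that the per-node id string should be just "0", e.g.
> >
> >   mpirun -np 2 mdrun -s topol.tpr -pin on -gpu_id 0
> >
> > since, as I understand it, -gpu_id maps the PP ranks on each node to
> > that node's local GPU ids. Is that right?)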
> >
> > 7. I also tried (1 node, 8 CPUs, 2 GPUs):
> > mdrun -s topol.tpr -ntmpi 2 -ntomp 8 -gpu_id 01
> > but this time I get a fatal error: thread-MPI ranks were requested, but
> > GROMACS was not compiled with thread MPI.
> >
> > Question: Isn't thread MPI enabled by default?
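> > (My understanding, which may be wrong: thread MPI is the default only
> > when GROMACS is built without a real MPI library, i.e. the default
> > cmake configuration with -DGMX_THREAD_MPI=ON. Our cluster build was
> > presumably configured with -DGMX_MPI=ON so that mpirun works, which I
> > believe replaces thread MPI and makes -ntmpi unavailable. Can someone
> > confirm?)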
> >
> > 8. Finally, I recompiled GROMACS without OpenMP and re-ran case 1, but
> > this time there is a fatal error: "More than 1 OpenMP thread requested,
> > but GROMACS was compiled without OpenMP support."
> > command: mdrun -s topol.tpr (no environment settings) -gpu_id 0
> > Question: Here again, I assumed thread MPI is enabled by default, and I
> > think GROMACS still assumes OpenMP thread settings. Am I doing
> > something wrong here?
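> > (To rule out a stale setting left over from the earlier cases, I would
> > double-check the job's actual environment with something like:
> >
> >   env | grep -i omp
> >
> > in case OMP_NUM_THREADS=8 is still exported in the submission script.)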
> >
> > Thanks in advance for your help.
> >
> > --
> > Siva



-- 
Siva

