[gmx-users] parallel processing

Mark Abraham mark.j.abraham at gmail.com
Thu Dec 8 17:15:04 CET 2016


Hi,

On Sat, Dec 3, 2016 at 6:34 PM abhisek Mondal <abhisek.mndl at gmail.com>
wrote:

> Hello Mark,
>
> I had gone through the page. Not finding any solution there.
>
> My architecture is:
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                16
> On-line CPU(s) list:   0-15
> Thread(s) per core:    1
> Core(s) per socket:    8
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 45
> Stepping:              7
> CPU MHz:               2593.616
> BogoMIPS:              5186.82
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              20480K
> NUMA node0 CPU(s):     0-7
> NUMA node1 CPU(s):     8-15
>
>
> When I put "mpirun -np 64 gmx_mpi mdrun -v -deffnm npt", it just runs on a
> single node using 64 threads (only 16 are available on a single node).
>
> How am I supposed to distribute the job across nodes efficiently? I'm trying
> to run the job on 4 nodes with 16 threads each.
> A little suggestion would be highly appreciated.
>

You need to read the documentation for your MPI library and, e.g., set up a
suitable hostfile so that mpirun knows which nodes to use. mdrun doesn't know
or care how mpirun works, but mpirun has to be set up to work properly :-)
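
For example, with Open MPI you might point mpirun at a hostfile that lists
your nodes and the 16 slots each has. This is only a sketch: the node names
are placeholders, the flags differ between MPI libraries, and if your cluster
uses a scheduler (e.g. SLURM) its launcher usually takes care of node
placement instead.

# hostfile: one line per node, 16 cores each (names are placeholders)
node01 slots=16
node02 slots=16
node03 slots=16
node04 slots=16

mpirun -np 4 --hostfile hostfile --map-by ppr:1:node gmx_mpi mdrun -ntomp 16 -v -deffnm npt

That starts one rank per node with 16 OpenMP threads each, so 4 x 16 matches
your 4 nodes of 16 cores instead of piling 64 threads onto a single node.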

Mark


>
>
> On Fri, Dec 2, 2016 at 4:55 PM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > Please check out
> > http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html
> > Your first sample command is woefully inefficient, but for energy
> > minimization (em) this likely doesn't matter.
> >
> > Mark
> >
> > On Fri, 2 Dec 2016 09:01 abhisek Mondal <abhisek.mndl at gmail.com> wrote:
> >
> > > But if I want to run the same job on 4 nodes (available cores = 4*16),
> > > how would this work?
> > >
> > > On Fri, Dec 2, 2016 at 2:20 PM, <jkrieger at mrc-lmb.cam.ac.uk> wrote:
> > >
> > > > Hi Abhisek,
> > > >
> > > > You would need to use another version of GROMACS built with MPI rather
> > > > than thread-MPI (built by adding -DGMX_MPI=ON to the cmake command).
> > > > You could then use the following command:
> > > >
> > > > mpirun -np 4 gmx_mpi mdrun -ntomp 16 -npme 0 -v -deffnm em
> > > >
> > > > I'm not sure why you are specifying -npme 0, but I would suggest you
> > > > don't do this and instead let the number of separate PME ranks be set
> > > > automatically.
> > > >
> > > > Best wishes
> > > > James
> > > >
> > > > > Hi,
> > > > >
> > > > > I'm running gromacs on a cluster configuration as follows:
> > > > > 1 node = 16 cores
> > > > >
> > > > > I'm able to use a single node with the command "gmx mdrun -ntmpi 4
> > > > > -ntomp 16 -npme 0 -v -deffnm em".
> > > > >
> > > > > How can I run on multiple nodes (I have 20 nodes available)?
> > > > > "-nt" is not working here.
> > > > >
> > > > >
> > > > >
>
>
>
> --
> Abhisek Mondal
>
> *Research Fellow*
>
> *Structural Biology and Bioinformatics Division*
> *CSIR-Indian Institute of Chemical Biology*
>
> *Kolkata 700032*
>
> *INDIA*
>

