[gmx-users] Running MD jobs using slurm on multiple nodes

Mark Abraham mark.j.abraham at gmail.com
Fri Jun 30 00:10:26 CEST 2017


Hi,

It depends on how many distinct simulations make sense to run. Normally you can
make use of multiple simulations, so for a system of 20K+ atoms, efficiency
is high if you run one simulation per node, compiled with thread-MPI, and
leave mdrun to manage its own details. Otherwise, you could compile with
MPI and run reasonably efficiently on more nodes, depending on the quality
of the network, specifically the latency of messages and the presence of other
users' traffic. Details vary with the network, the size of the simulation
system, and the model physics you use.
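
For the one-simulation-per-node case, a minimal sketch of a batch script could
look like the one below. The partition name, wall time, module name and input
file names are placeholders you would need to adapt to your cluster, and it
assumes a thread-MPI build of GROMACS 2016.3 provides the gmx binary on the
compute node.

#!/bin/bash
#SBATCH --job-name=md-threadmpi
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=28        # all 28 cores of one node
#SBATCH --time=24:00:00           # placeholder wall time
#SBATCH --partition=compute       # placeholder partition name

module load gromacs/2016.3        # placeholder; use whatever provides gmx

# Thread-MPI build: one gmx process owns the whole node, and mdrun
# chooses its own split of thread-MPI ranks and OpenMP threads.
gmx mdrun -deffnm md

You would then submit one such script per simulation, each landing on its own
node.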

Background details at
http://manual.gromacs.org/documentation/2016.3/user-guide/mdrun-performance.html
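
If you do go the MPI route (the gmx_mpi binary), a two-node sketch could look
like the following. The two-ranks-per-node, 14-OpenMP-threads-per-rank split
simply matches the two sockets of your nodes and is only a starting point to
benchmark, as the performance guide above explains; the partition, wall time
and module names are again placeholders.

#!/bin/bash
#SBATCH --job-name=md-mpi
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2       # one MPI rank per socket (starting point)
#SBATCH --cpus-per-task=14        # 14 OpenMP threads per rank
#SBATCH --time=24:00:00           # placeholder wall time
#SBATCH --partition=compute       # placeholder partition name

module load gromacs/2016.3        # placeholder; use whatever provides gmx_mpi

# MPI build: srun starts the ranks across the nodes; -ntomp matches the
# cpus-per-task allocation.
srun gmx_mpi mdrun -deffnm md -ntomp $SLURM_CPUS_PER_TASK

Whether this beats two independent single-node runs depends on your network,
so it is worth timing both setups on your system.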

Mark

On Thu, Jun 29, 2017 at 11:45 PM Thanh Le <thanh.q.le at sjsu.edu> wrote:

> Hi all,
> I am quite new to running MD jobs using slurm on multiple nodes. What
> confuses me is the creation of a slurm script. I don’t quite understand
> what inputs I should use to run efficiently.
> Please teach me how to create a slurm script and the mdrun command.
> Here are the info of the HPC:
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                28
> On-line CPU(s) list:   0-27
> Thread(s) per core:    1
> Core(s) per socket:    14
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 79
> Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
> Stepping:              1
> CPU MHz:               1200.000
> BogoMIPS:              4795.21
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              35840K
> NUMA node0 CPU(s):     0-13
> NUMA node1 CPU(s):     14-27
> This HPC has about 40 nodes. I have my PI’s permission to use all nodes to
> run as many jobs as I want.
> The version of GROMACS is version 2016.3.
> I am looking forward to hearing from you guys.
> Thanks,
> Thanh Le
>

