[gmx-users] gromacs.org_gmx-users Digest, Vol 158, Issue 186

Mark Abraham mark.j.abraham at gmail.com
Fri Jun 30 00:38:07 CEST 2017


Hi,

Don't even consider running on more than one node. You can see why for
yourself by comparing the performance of, for example,

gmx mdrun -nt 1 -pin on
gmx mdrun -nt 2 -pin on
gmx mdrun -nt 14 -pin on
gmx mdrun -nt 28 -pin on

... to run on 1, 2, 14 and 28 cores respectively. Parallel efficiency
drops off as you approach 100 atoms per core.
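
To compare them, look at the Performance line near the end of each run's
log file, which reports ns/day and hours/ns. A minimal way to pull it
out, assuming the default md.log name (use -deffnm or separate working
directories so the test runs don't overwrite each other's files):

grep -B1 Performance md.log

For your ~8000-atom system, even 28 cores is only about 285 atoms per
core; spreading over a second node would drop that to roughly 140, which
is why multi-node scaling is unlikely to pay off here.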

Further, the factor of seven in the core count is a surefire way to be
inefficient, because the domain decomposition will have to partition into
seven domains along one direction. I would consider running three
simulations per node, with 9, 10 and 9 cores per simulation, using
gmx mdrun -nt x -pin on -pinoffset y for suitable x and y. But try the
above experiment first.
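
If the single-node test makes that split look worthwhile, the three runs
on one node could be launched along these lines (only a sketch: it
assumes a 28-core node with hyper-threading off, sim1/sim2/sim3 are
placeholder names for your own tpr files, and the -pinoffset values count
logical cores, so adjust them to your hardware layout):

gmx mdrun -deffnm sim1 -nt 9 -pin on -pinoffset 0 &
gmx mdrun -deffnm sim2 -nt 10 -pin on -pinoffset 9 &
gmx mdrun -deffnm sim3 -nt 9 -pin on -pinoffset 19 &
wait

Using -deffnm keeps each run's input and output files separate, so the
three simulations don't overwrite each other.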

Mark

On Fri, Jun 30, 2017 at 12:21 AM Thanh Le <thanh.q.le at sjsu.edu> wrote:

> Hi Mr. Abraham.
> My system is quite small, only about 8000 atoms. I have run this system
> for 100 ns, which took roughly 2 days, so a 1-microsecond run would take
> about 20 days. I am trying to shorten that to about 2 days by using more
> than one node.
> Thanks,
> Thanh Le

